=== RUN TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run: out/minikube-linux-amd64 start -p multinode-859606 --wait=true -v=8 --alsologtostderr --driver=kvm2
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-859606 --wait=true -v=8 --alsologtostderr --driver=kvm2 : exit status 90 (1m24.715053505s)
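Exit status 90 sits in minikube's runtime-error range (the ExRuntime* class in its exit-code scheme), so the failure here is most likely the container runtime on one of the nodes; the captured output below stops while the worker node multinode-859606-m02 is being restarted. As a minimal, hypothetical sketch (not the actual multinode_test.go harness), an exit status like this can be recovered in Go as follows:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "multinode-859606",
		"--wait=true", "-v=8", "--alsologtostderr", "--driver=kvm2")
	out, err := cmd.CombinedOutput() // stdout and stderr interleaved, as in the dump below
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 90 maps to minikube's runtime-error class (assumption based on its exit-code scheme)
		fmt.Printf("Non-zero exit: exit status %d\n%s", exitErr.ExitCode(), out)
	}
}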
-- stdout --
* [multinode-859606] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17764
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting control plane node multinode-859606 in cluster multinode-859606
* Restarting existing kvm2 VM for "multinode-859606" ...
* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
* Configuring CNI (Container Networking Interface) ...
* Enabled addons:
* Verifying Kubernetes components...
* Starting worker node multinode-859606-m02 in cluster multinode-859606
* Restarting existing kvm2 VM for "multinode-859606-m02" ...
* Found network options:
- NO_PROXY=192.168.39.40
-- /stdout --
** stderr **
I1212 00:36:19.566152 104530 out.go:296] Setting OutFile to fd 1 ...
I1212 00:36:19.566265 104530 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:19.566273 104530 out.go:309] Setting ErrFile to fd 2...
I1212 00:36:19.566277 104530 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:19.566462 104530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
I1212 00:36:19.566987 104530 out.go:303] Setting JSON to false
I1212 00:36:19.567880 104530 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11880,"bootTime":1702329500,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1212 00:36:19.567966 104530 start.go:138] virtualization: kvm guest
I1212 00:36:19.570536 104530 out.go:177] * [multinode-859606] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I1212 00:36:19.572060 104530 notify.go:220] Checking for updates...
I1212 00:36:19.572071 104530 out.go:177] - MINIKUBE_LOCATION=17764
I1212 00:36:19.573648 104530 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1212 00:36:19.575043 104530 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
I1212 00:36:19.576502 104530 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
I1212 00:36:19.578073 104530 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1212 00:36:19.579463 104530 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1212 00:36:19.581288 104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:36:19.581767 104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:36:19.581821 104530 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:36:19.596096 104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
I1212 00:36:19.596488 104530 main.go:141] libmachine: () Calling .GetVersion
I1212 00:36:19.597060 104530 main.go:141] libmachine: Using API Version 1
I1212 00:36:19.597091 104530 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:36:19.597481 104530 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:36:19.597646 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:19.597948 104530 driver.go:392] Setting default libvirt URI to qemu:///system
I1212 00:36:19.598247 104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:36:19.598293 104530 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:36:19.612639 104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43365
I1212 00:36:19.613044 104530 main.go:141] libmachine: () Calling .GetVersion
I1212 00:36:19.613494 104530 main.go:141] libmachine: Using API Version 1
I1212 00:36:19.613515 104530 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:36:19.613814 104530 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:36:19.613998 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:19.648526 104530 out.go:177] * Using the kvm2 driver based on existing profile
I1212 00:36:19.650074 104530 start.go:298] selected driver: kvm2
I1212 00:36:19.650086 104530 start.go:902] validating driver "kvm2" against &{Name:multinode-859606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1212 00:36:19.650266 104530 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1212 00:36:19.650710 104530 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:36:19.650794 104530 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17764-80294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1212 00:36:19.664949 104530 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
I1212 00:36:19.665848 104530 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1212 00:36:19.665938 104530 cni.go:84] Creating CNI manager for ""
I1212 00:36:19.665955 104530 cni.go:136] 2 nodes found, recommending kindnet
I1212 00:36:19.665965 104530 start_flags.go:323] config:
{Name:multinode-859606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1212 00:36:19.666224 104530 iso.go:125] acquiring lock: {Name:mk9f395cbf4246894893bf64341667bb412992c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:36:19.668183 104530 out.go:177] * Starting control plane node multinode-859606 in cluster multinode-859606
I1212 00:36:19.669663 104530 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I1212 00:36:19.669706 104530 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
I1212 00:36:19.669717 104530 cache.go:56] Caching tarball of preloaded images
I1212 00:36:19.669796 104530 preload.go:174] Found /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1212 00:36:19.669808 104530 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I1212 00:36:19.669923 104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
I1212 00:36:19.670107 104530 start.go:365] acquiring machines lock for multinode-859606: {Name:mk381e91746c2e5b8a4620fe3fd447d80375e413 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1212 00:36:19.670157 104530 start.go:369] acquired machines lock for "multinode-859606" in 32.405µs
I1212 00:36:19.670175 104530 start.go:96] Skipping create...Using existing machine configuration
I1212 00:36:19.670183 104530 fix.go:54] fixHost starting:
I1212 00:36:19.670424 104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:36:19.670455 104530 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:36:19.684474 104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
I1212 00:36:19.684891 104530 main.go:141] libmachine: () Calling .GetVersion
I1212 00:36:19.685333 104530 main.go:141] libmachine: Using API Version 1
I1212 00:36:19.685356 104530 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:36:19.685644 104530 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:36:19.685828 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:19.685946 104530 main.go:141] libmachine: (multinode-859606) Calling .GetState
I1212 00:36:19.687411 104530 fix.go:102] recreateIfNeeded on multinode-859606: state=Stopped err=<nil>
I1212 00:36:19.687443 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
W1212 00:36:19.687615 104530 fix.go:128] unexpected machine state, will restart: <nil>
I1212 00:36:19.689763 104530 out.go:177] * Restarting existing kvm2 VM for "multinode-859606" ...
I1212 00:36:19.691324 104530 main.go:141] libmachine: (multinode-859606) Calling .Start
I1212 00:36:19.691550 104530 main.go:141] libmachine: (multinode-859606) Ensuring networks are active...
I1212 00:36:19.692253 104530 main.go:141] libmachine: (multinode-859606) Ensuring network default is active
I1212 00:36:19.692574 104530 main.go:141] libmachine: (multinode-859606) Ensuring network mk-multinode-859606 is active
I1212 00:36:19.692847 104530 main.go:141] libmachine: (multinode-859606) Getting domain xml...
I1212 00:36:19.693505 104530 main.go:141] libmachine: (multinode-859606) Creating domain...
I1212 00:36:20.929419 104530 main.go:141] libmachine: (multinode-859606) Waiting to get IP...
I1212 00:36:20.930523 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:20.930912 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:20.931040 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:20.930906 104565 retry.go:31] will retry after 273.212272ms: waiting for machine to come up
I1212 00:36:21.205460 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:21.205872 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:21.205901 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:21.205852 104565 retry.go:31] will retry after 326.892458ms: waiting for machine to come up
I1212 00:36:21.534529 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:21.534921 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:21.534943 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:21.534891 104565 retry.go:31] will retry after 343.135816ms: waiting for machine to come up
I1212 00:36:21.879459 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:21.879929 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:21.879953 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:21.879870 104565 retry.go:31] will retry after 589.671783ms: waiting for machine to come up
I1212 00:36:22.471637 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:22.472097 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:22.472120 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:22.472073 104565 retry.go:31] will retry after 637.139279ms: waiting for machine to come up
I1212 00:36:23.110881 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:23.111236 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:23.111267 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:23.111178 104565 retry.go:31] will retry after 745.620292ms: waiting for machine to come up
I1212 00:36:23.858157 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:23.858677 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:23.858707 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:23.858634 104565 retry.go:31] will retry after 1.181130732s: waiting for machine to come up
I1212 00:36:25.041534 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:25.041972 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:25.042004 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:25.041923 104565 retry.go:31] will retry after 1.339637741s: waiting for machine to come up
I1212 00:36:26.383605 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:26.383992 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:26.384019 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:26.383923 104565 retry.go:31] will retry after 1.520765812s: waiting for machine to come up
I1212 00:36:27.906937 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:27.907387 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:27.907415 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:27.907357 104565 retry.go:31] will retry after 1.874600317s: waiting for machine to come up
I1212 00:36:29.783675 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:29.784134 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:29.784174 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:29.784075 104565 retry.go:31] will retry after 2.274077714s: waiting for machine to come up
I1212 00:36:32.061527 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:32.061959 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:32.061986 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:32.061913 104565 retry.go:31] will retry after 3.21102487s: waiting for machine to come up
I1212 00:36:35.274900 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:35.275327 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:35.275356 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:35.275295 104565 retry.go:31] will retry after 4.00191762s: waiting for machine to come up
I1212 00:36:39.281352 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.281835 104530 main.go:141] libmachine: (multinode-859606) Found IP for machine: 192.168.39.40
I1212 00:36:39.281858 104530 main.go:141] libmachine: (multinode-859606) Reserving static IP address...
I1212 00:36:39.281874 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has current primary IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.282305 104530 main.go:141] libmachine: (multinode-859606) Reserved static IP address: 192.168.39.40
I1212 00:36:39.282362 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "multinode-859606", mac: "52:54:00:16:26:7f", ip: "192.168.39.40"} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.282382 104530 main.go:141] libmachine: (multinode-859606) Waiting for SSH to be available...
I1212 00:36:39.282413 104530 main.go:141] libmachine: (multinode-859606) DBG | skip adding static IP to network mk-multinode-859606 - found existing host DHCP lease matching {name: "multinode-859606", mac: "52:54:00:16:26:7f", ip: "192.168.39.40"}
I1212 00:36:39.282430 104530 main.go:141] libmachine: (multinode-859606) DBG | Getting to WaitForSSH function...
I1212 00:36:39.284738 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.285057 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.285110 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.285169 104530 main.go:141] libmachine: (multinode-859606) DBG | Using SSH client type: external
I1212 00:36:39.285210 104530 main.go:141] libmachine: (multinode-859606) DBG | Using SSH private key: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa (-rw-------)
I1212 00:36:39.285247 104530 main.go:141] libmachine: (multinode-859606) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa -p 22] /usr/bin/ssh <nil>}
I1212 00:36:39.285259 104530 main.go:141] libmachine: (multinode-859606) DBG | About to run SSH command:
I1212 00:36:39.285268 104530 main.go:141] libmachine: (multinode-859606) DBG | exit 0
I1212 00:36:39.375522 104530 main.go:141] libmachine: (multinode-859606) DBG | SSH cmd err, output: <nil>:
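The retry.go lines above (00:36:20 through 00:36:39) show the IP poll backing off from roughly 273ms to 4s until the domain acquires its DHCP lease, after which WaitForSSH confirms the guest with an external ssh "exit 0". Below is a minimal sketch of that poll-with-backoff pattern; the doubling-with-cap schedule is an assumption for illustration, not minikube's actual jittered retry implementation.

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// waitForIP polls lookup until it returns an address or the timeout expires,
// doubling the delay between attempts up to a cap, like the retries above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		log.Printf("will retry after %v: waiting for machine to come up", delay)
		time.Sleep(delay)
		if delay *= 2; delay > 4*time.Second { // cap the backoff
			delay = 4 * time.Second
		}
	}
	return "", fmt.Errorf("timed out after %v waiting for machine IP", timeout)
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		if attempts++; attempts < 4 { // simulate the lease showing up on the 4th poll
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.40", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}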
I1212 00:36:39.375955 104530 main.go:141] libmachine: (multinode-859606) Calling .GetConfigRaw
I1212 00:36:39.376683 104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
I1212 00:36:39.379083 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.379448 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.379483 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.379735 104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
I1212 00:36:39.379953 104530 machine.go:88] provisioning docker machine ...
I1212 00:36:39.379970 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:39.380177 104530 main.go:141] libmachine: (multinode-859606) Calling .GetMachineName
I1212 00:36:39.380335 104530 buildroot.go:166] provisioning hostname "multinode-859606"
I1212 00:36:39.380350 104530 main.go:141] libmachine: (multinode-859606) Calling .GetMachineName
I1212 00:36:39.380483 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:39.382706 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.383084 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.383109 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.383231 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:39.383413 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.383548 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.383686 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:39.383852 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:36:39.384221 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.40 22 <nil> <nil>}
I1212 00:36:39.384236 104530 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-859606 && echo "multinode-859606" | sudo tee /etc/hostname
I1212 00:36:39.519767 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-859606
I1212 00:36:39.519800 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:39.522378 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.522790 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.522832 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.522956 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:39.523177 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.523364 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.523491 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:39.523659 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:36:39.523993 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.40 22 <nil> <nil>}
I1212 00:36:39.524011 104530 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-859606' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-859606/g' /etc/hosts;
else
echo '127.0.1.1 multinode-859606' | sudo tee -a /etc/hosts;
fi
fi
I1212 00:36:39.656285 104530 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1212 00:36:39.656370 104530 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17764-80294/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-80294/.minikube}
I1212 00:36:39.656408 104530 buildroot.go:174] setting up certificates
I1212 00:36:39.656417 104530 provision.go:83] configureAuth start
I1212 00:36:39.656432 104530 main.go:141] libmachine: (multinode-859606) Calling .GetMachineName
I1212 00:36:39.656702 104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
I1212 00:36:39.659384 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.659735 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.659764 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.659868 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:39.662155 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.662517 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.662547 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.662670 104530 provision.go:138] copyHostCerts
I1212 00:36:39.662701 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
I1212 00:36:39.662745 104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem, removing ...
I1212 00:36:39.662764 104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
I1212 00:36:39.662840 104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem (1078 bytes)
I1212 00:36:39.662932 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
I1212 00:36:39.662954 104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem, removing ...
I1212 00:36:39.662963 104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
I1212 00:36:39.662998 104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem (1123 bytes)
I1212 00:36:39.663072 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
I1212 00:36:39.663106 104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem, removing ...
I1212 00:36:39.663115 104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
I1212 00:36:39.663149 104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem (1679 bytes)
I1212 00:36:39.663211 104530 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem org=jenkins.multinode-859606 san=[192.168.39.40 192.168.39.40 localhost 127.0.0.1 minikube multinode-859606]
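provision.go generates a server certificate whose SANs cover the node IP, localhost, and the cluster hostnames, signed by the minikube CA (ca.pem/ca-key.pem) and valid for the configured CertExpiration of 26280h. As an illustrative sketch of the SAN handling with crypto/x509 (self-signed for brevity, where minikube signs with its CA key instead; values mirror the log line above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-859606"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: node IP, localhost, cluster hostnames
		DNSNames:    []string{"localhost", "minikube", "multinode-859606"},
		IPAddresses: []net.IP{net.ParseIP("192.168.39.40"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed here (template doubles as parent); minikube signs with ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}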
I1212 00:36:39.752771 104530 provision.go:172] copyRemoteCerts
I1212 00:36:39.752840 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1212 00:36:39.752864 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:39.755641 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.755981 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.756012 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.756148 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:39.756362 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.756505 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:39.756620 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
I1212 00:36:39.848757 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1212 00:36:39.848827 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1212 00:36:39.872145 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1212 00:36:39.872230 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1212 00:36:39.895524 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem -> /etc/docker/server.pem
I1212 00:36:39.895625 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I1212 00:36:39.919081 104530 provision.go:86] duration metric: configureAuth took 262.648578ms
I1212 00:36:39.919117 104530 buildroot.go:189] setting minikube options for container-runtime
I1212 00:36:39.919362 104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:36:39.919392 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:39.919652 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:39.922322 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.922662 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.922694 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.922873 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:39.923053 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.923205 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.923322 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:39.923479 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:36:39.923797 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.40 22 <nil> <nil>}
I1212 00:36:39.923808 104530 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1212 00:36:40.049654 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I1212 00:36:40.049683 104530 buildroot.go:70] root file system type: tmpfs
I1212 00:36:40.049826 104530 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1212 00:36:40.049854 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:40.052273 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:40.052615 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:40.052648 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:40.052798 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:40.053014 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:40.053178 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:40.053328 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:40.053470 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:36:40.053822 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.40 22 <nil> <nil>}
I1212 00:36:40.053890 104530 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1212 00:36:40.188800 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1212 00:36:40.188832 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:40.191559 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:40.191974 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:40.192007 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:40.192190 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:40.192371 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:40.192563 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:40.192665 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:40.192866 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:36:40.193267 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.40 22 <nil> <nil>}
I1212 00:36:40.193286 104530 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1212 00:36:41.206767 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I1212 00:36:41.206800 104530 machine.go:91] provisioned docker machine in 1.826833328s
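The provisioning step above regenerates the unit into docker.service.new, diffs it against the installed unit, and only swaps the file and restarts Docker when the two differ (here diff cannot stat the old unit, so the new one is installed and enabled). A sketch of that compare-then-replace idempotent update, with hypothetical paths rather than minikube's actual code:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateIfChanged writes newContent to path only when it differs from the
// current contents, returning true when a swap (and hence a daemon-reload
// plus service restart) is needed, mirroring the diff/mv pattern above.
func updateIfChanged(path string, newContent []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // unchanged: skip the restart
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, newContent, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path) // atomic swap into place
}

func main() {
	changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"))
	fmt.Println("restart needed:", changed, "err:", err)
}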
I1212 00:36:41.206817 104530 start.go:300] post-start starting for "multinode-859606" (driver="kvm2")
I1212 00:36:41.206830 104530 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1212 00:36:41.206852 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:41.207178 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1212 00:36:41.207202 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:41.209997 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.210348 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:41.210381 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.210498 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:41.210690 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:41.210833 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:41.210981 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
I1212 00:36:41.301876 104530 ssh_runner.go:195] Run: cat /etc/os-release
I1212 00:36:41.306227 104530 command_runner.go:130] > NAME=Buildroot
I1212 00:36:41.306246 104530 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
I1212 00:36:41.306250 104530 command_runner.go:130] > ID=buildroot
I1212 00:36:41.306262 104530 command_runner.go:130] > VERSION_ID=2021.02.12
I1212 00:36:41.306266 104530 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I1212 00:36:41.306469 104530 info.go:137] Remote host: Buildroot 2021.02.12
I1212 00:36:41.306487 104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/addons for local assets ...
I1212 00:36:41.306534 104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/files for local assets ...
I1212 00:36:41.306599 104530 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> 876092.pem in /etc/ssl/certs
I1212 00:36:41.306609 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> /etc/ssl/certs/876092.pem
I1212 00:36:41.306693 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1212 00:36:41.315869 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem --> /etc/ssl/certs/876092.pem (1708 bytes)
I1212 00:36:41.338667 104530 start.go:303] post-start completed in 131.83456ms
I1212 00:36:41.338691 104530 fix.go:56] fixHost completed within 21.668507657s
I1212 00:36:41.338718 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:41.341292 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.341664 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:41.341694 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.341888 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:41.342101 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:41.342241 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:41.342408 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:41.342541 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:36:41.342886 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.40 22 <nil> <nil>}
I1212 00:36:41.342902 104530 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1212 00:36:41.468622 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702341401.415199028
I1212 00:36:41.468653 104530 fix.go:206] guest clock: 1702341401.415199028
I1212 00:36:41.468663 104530 fix.go:219] Guest: 2023-12-12 00:36:41.415199028 +0000 UTC Remote: 2023-12-12 00:36:41.338694258 +0000 UTC m=+21.821939649 (delta=76.50477ms)
I1212 00:36:41.468688 104530 fix.go:190] guest clock delta is within tolerance: 76.50477ms
I1212 00:36:41.468695 104530 start.go:83] releasing machines lock for "multinode-859606", held for 21.798528151s
I1212 00:36:41.468721 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:41.469036 104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
I1212 00:36:41.471587 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.471996 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:41.472029 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.472196 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:41.472679 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:41.472871 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:41.472969 104530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1212 00:36:41.473018 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:41.473104 104530 ssh_runner.go:195] Run: cat /version.json
I1212 00:36:41.473135 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:41.475372 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.475531 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.475739 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:41.475765 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.475949 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:41.475965 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:41.475979 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.476148 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:41.476167 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:41.476322 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:41.476325 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:41.476507 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:41.476503 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
I1212 00:36:41.476677 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
I1212 00:36:41.586671 104530 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I1212 00:36:41.587519 104530 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
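
The /version.json payload echoed above is plain JSON; a small sketch of decoding it in Go, with the struct shape inferred from the logged fields:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // versionInfo mirrors the fields visible in the logged /version.json payload.
    type versionInfo struct {
    	ISOVersion      string `json:"iso_version"`
    	KICBaseVersion  string `json:"kicbase_version"`
    	MinikubeVersion string `json:"minikube_version"`
    	Commit          string `json:"commit"`
    }

    func main() {
    	raw := `{"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}`
    	var v versionInfo
    	if err := json.Unmarshal([]byte(raw), &v); err != nil {
    		panic(err)
    	}
    	fmt.Println(v.ISOVersion, v.MinikubeVersion)
    }
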
I1212 00:36:41.587648 104530 ssh_runner.go:195] Run: systemctl --version
I1212 00:36:41.593336 104530 command_runner.go:130] > systemd 247 (247)
I1212 00:36:41.593360 104530 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I1212 00:36:41.593423 104530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1212 00:36:41.598984 104530 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W1212 00:36:41.599019 104530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1212 00:36:41.599060 104530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1212 00:36:41.614960 104530 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I1212 00:36:41.614996 104530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1212 00:36:41.615008 104530 start.go:475] detecting cgroup driver to use...
I1212 00:36:41.615155 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1212 00:36:41.631749 104530 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
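
The mkdir/printf/tee pipeline above simply materializes a one-line crictl config; an equivalent local write sketched in Go (must run as root, since it targets /etc):

    package main

    import (
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Same content the log shows being piped through `sudo tee`.
    	const crictlYAML = "runtime-endpoint: unix:///run/containerd/containerd.sock\n"
    	path := "/etc/crictl.yaml"
    	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile(path, []byte(crictlYAML), 0o644); err != nil {
    		panic(err)
    	}
    }
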
I1212 00:36:41.632091 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I1212 00:36:41.642135 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1212 00:36:41.651964 104530 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1212 00:36:41.652033 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1212 00:36:41.661909 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:36:41.672216 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1212 00:36:41.681323 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:36:41.691358 104530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1212 00:36:41.701487 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1212 00:36:41.711473 104530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1212 00:36:41.720346 104530 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I1212 00:36:41.720490 104530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1212 00:36:41.729603 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:36:41.829613 104530 ssh_runner.go:195] Run: sudo systemctl restart containerd
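
The run of sed edits above switches containerd to the cgroupfs driver; a sketch of the SystemdCgroup rewrite done in Go instead of sed, with the regex mirroring the logged command:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		panic(err)
    	}
    }
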
I1212 00:36:41.846807 104530 start.go:475] detecting cgroup driver to use...
I1212 00:36:41.846894 104530 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1212 00:36:41.859661 104530 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I1212 00:36:41.860603 104530 command_runner.go:130] > [Unit]
I1212 00:36:41.860621 104530 command_runner.go:130] > Description=Docker Application Container Engine
I1212 00:36:41.860629 104530 command_runner.go:130] > Documentation=https://docs.docker.com
I1212 00:36:41.860638 104530 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I1212 00:36:41.860648 104530 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I1212 00:36:41.860662 104530 command_runner.go:130] > StartLimitBurst=3
I1212 00:36:41.860671 104530 command_runner.go:130] > StartLimitIntervalSec=60
I1212 00:36:41.860679 104530 command_runner.go:130] > [Service]
I1212 00:36:41.860686 104530 command_runner.go:130] > Type=notify
I1212 00:36:41.860694 104530 command_runner.go:130] > Restart=on-failure
I1212 00:36:41.860715 104530 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I1212 00:36:41.860734 104530 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I1212 00:36:41.860748 104530 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I1212 00:36:41.860757 104530 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I1212 00:36:41.860767 104530 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I1212 00:36:41.860781 104530 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I1212 00:36:41.860791 104530 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I1212 00:36:41.860803 104530 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I1212 00:36:41.860812 104530 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I1212 00:36:41.860818 104530 command_runner.go:130] > ExecStart=
I1212 00:36:41.860837 104530 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I1212 00:36:41.860845 104530 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I1212 00:36:41.860854 104530 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I1212 00:36:41.860863 104530 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I1212 00:36:41.860867 104530 command_runner.go:130] > LimitNOFILE=infinity
I1212 00:36:41.860872 104530 command_runner.go:130] > LimitNPROC=infinity
I1212 00:36:41.860876 104530 command_runner.go:130] > LimitCORE=infinity
I1212 00:36:41.860881 104530 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I1212 00:36:41.860886 104530 command_runner.go:130] > # Only systemd 226 and above support this version.
I1212 00:36:41.860893 104530 command_runner.go:130] > TasksMax=infinity
I1212 00:36:41.860897 104530 command_runner.go:130] > TimeoutStartSec=0
I1212 00:36:41.860903 104530 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I1212 00:36:41.860907 104530 command_runner.go:130] > Delegate=yes
I1212 00:36:41.860912 104530 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I1212 00:36:41.860916 104530 command_runner.go:130] > KillMode=process
I1212 00:36:41.860921 104530 command_runner.go:130] > [Install]
I1212 00:36:41.860934 104530 command_runner.go:130] > WantedBy=multi-user.target
I1212 00:36:41.861408 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1212 00:36:41.875266 104530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1212 00:36:41.894559 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1212 00:36:41.907084 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1212 00:36:41.919502 104530 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1212 00:36:41.951570 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1212 00:36:41.963632 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1212 00:36:41.980713 104530 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I1212 00:36:41.980788 104530 ssh_runner.go:195] Run: which cri-dockerd
I1212 00:36:41.984334 104530 command_runner.go:130] > /usr/bin/cri-dockerd
I1212 00:36:41.984645 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1212 00:36:41.993852 104530 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1212 00:36:42.009538 104530 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1212 00:36:42.118265 104530 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1212 00:36:42.228976 104530 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
I1212 00:36:42.229126 104530 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1212 00:36:42.245311 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:36:42.345292 104530 ssh_runner.go:195] Run: sudo systemctl restart docker
I1212 00:36:43.830127 104530 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.484785426s)
I1212 00:36:43.830211 104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1212 00:36:43.943279 104530 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1212 00:36:44.053942 104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1212 00:36:44.164844 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:36:44.275934 104530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1212 00:36:44.291963 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:36:44.392776 104530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I1212 00:36:44.474244 104530 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1212 00:36:44.474311 104530 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1212 00:36:44.480515 104530 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I1212 00:36:44.480535 104530 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I1212 00:36:44.480541 104530 command_runner.go:130] > Device: 16h/22d Inode: 819 Links: 1
I1212 00:36:44.480548 104530 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I1212 00:36:44.480554 104530 command_runner.go:130] > Access: 2023-12-12 00:36:44.352977075 +0000
I1212 00:36:44.480559 104530 command_runner.go:130] > Modify: 2023-12-12 00:36:44.352977075 +0000
I1212 00:36:44.480564 104530 command_runner.go:130] > Change: 2023-12-12 00:36:44.355977075 +0000
I1212 00:36:44.480567 104530 command_runner.go:130] > Birth: -
I1212 00:36:44.480717 104530 start.go:543] Will wait 60s for crictl version
I1212 00:36:44.480773 104530 ssh_runner.go:195] Run: which crictl
I1212 00:36:44.484627 104530 command_runner.go:130] > /usr/bin/crictl
I1212 00:36:44.484837 104530 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1212 00:36:44.546652 104530 command_runner.go:130] > Version: 0.1.0
I1212 00:36:44.546684 104530 command_runner.go:130] > RuntimeName: docker
I1212 00:36:44.546692 104530 command_runner.go:130] > RuntimeVersion: 24.0.7
I1212 00:36:44.546719 104530 command_runner.go:130] > RuntimeApiVersion: v1
I1212 00:36:44.548311 104530 start.go:559] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.7
RuntimeApiVersion: v1
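
The crictl version output parsed at start.go:559 is simple "Key: Value" text; a sketch of pulling fields out of it:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseCrictlVersion pulls "Key: Value" pairs out of `crictl version` output.
    func parseCrictlVersion(out string) map[string]string {
    	kv := map[string]string{}
    	for _, line := range strings.Split(out, "\n") {
    		if k, v, ok := strings.Cut(line, ":"); ok {
    			kv[strings.TrimSpace(k)] = strings.TrimSpace(v)
    		}
    	}
    	return kv
    }

    func main() {
    	out := "Version:  0.1.0\nRuntimeName:  docker\nRuntimeVersion:  24.0.7\nRuntimeApiVersion:  v1"
    	fmt.Println(parseCrictlVersion(out)["RuntimeVersion"]) // 24.0.7
    }
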
I1212 00:36:44.548389 104530 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1212 00:36:44.576456 104530 command_runner.go:130] > 24.0.7
I1212 00:36:44.576586 104530 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1212 00:36:44.599730 104530 command_runner.go:130] > 24.0.7
I1212 00:36:44.602571 104530 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
I1212 00:36:44.602615 104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
I1212 00:36:44.605105 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:44.605567 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:44.605594 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:44.605828 104530 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1212 00:36:44.609867 104530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1212 00:36:44.622768 104530 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I1212 00:36:44.622818 104530 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1212 00:36:44.642692 104530 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
I1212 00:36:44.642720 104530 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
I1212 00:36:44.642729 104530 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
I1212 00:36:44.642749 104530 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
I1212 00:36:44.642756 104530 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
I1212 00:36:44.642764 104530 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I1212 00:36:44.642773 104530 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I1212 00:36:44.642785 104530 command_runner.go:130] > registry.k8s.io/pause:3.9
I1212 00:36:44.642793 104530 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I1212 00:36:44.642804 104530 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I1212 00:36:44.642841 104530 docker.go:671] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-scheduler:v1.28.4
kindest/kindnetd:v20230809-80a64d96
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I1212 00:36:44.642858 104530 docker.go:601] Images already preloaded, skipping extraction
I1212 00:36:44.642930 104530 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1212 00:36:44.661008 104530 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
I1212 00:36:44.661047 104530 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
I1212 00:36:44.661054 104530 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
I1212 00:36:44.661062 104530 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
I1212 00:36:44.661068 104530 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
I1212 00:36:44.661084 104530 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I1212 00:36:44.661093 104530 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I1212 00:36:44.661108 104530 command_runner.go:130] > registry.k8s.io/pause:3.9
I1212 00:36:44.661116 104530 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I1212 00:36:44.661126 104530 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I1212 00:36:44.661894 104530 docker.go:671] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-scheduler:v1.28.4
kindest/kindnetd:v20230809-80a64d96
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I1212 00:36:44.661911 104530 cache_images.go:84] Images are preloaded, skipping loading
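
Whether to extract the preload tarball comes down to a set comparison between the expected image list and what `docker images` reports; an illustrative sketch over a subset of the images listed above (the real logic lives in minikube's cache_images.go):

    package main

    import "fmt"

    func main() {
    	expected := []string{
    		"registry.k8s.io/kube-apiserver:v1.28.4",
    		"registry.k8s.io/etcd:3.5.9-0",
    		"registry.k8s.io/pause:3.9",
    	}
    	// Parsed from `docker images --format {{.Repository}}:{{.Tag}}`.
    	got := map[string]bool{
    		"registry.k8s.io/kube-apiserver:v1.28.4": true,
    		"registry.k8s.io/etcd:3.5.9-0":           true,
    		"registry.k8s.io/pause:3.9":              true,
    	}
    	var missing []string
    	for _, img := range expected {
    		if !got[img] {
    			missing = append(missing, img)
    		}
    	}
    	fmt.Println("preloaded:", len(missing) == 0) // true -> skip extraction
    }
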
I1212 00:36:44.661965 104530 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1212 00:36:44.688198 104530 command_runner.go:130] > cgroupfs
I1212 00:36:44.688431 104530 cni.go:84] Creating CNI manager for ""
I1212 00:36:44.688451 104530 cni.go:136] 2 nodes found, recommending kindnet
I1212 00:36:44.688483 104530 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1212 00:36:44.688527 104530 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.40 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-859606 NodeName:multinode-859606 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.40"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.40 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1212 00:36:44.688714 104530 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.40
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "multinode-859606"
kubeletExtraArgs:
node-ip: 192.168.39.40
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.40"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.4
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1212 00:36:44.688816 104530 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-859606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.40
[Install]
config:
{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
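
The kubelet ExecStart line above is just the versioned binary path plus alphabetically sorted --key=value flags; a sketch of assembling it from the values visible in the log (the sorted order reproduces the logged line):

    package main

    import (
    	"fmt"
    	"sort"
    	"strings"
    )

    func main() {
    	// Flag values taken from the logged ExecStart line.
    	flags := map[string]string{
    		"bootstrap-kubeconfig":       "/etc/kubernetes/bootstrap-kubelet.conf",
    		"config":                     "/var/lib/kubelet/config.yaml",
    		"container-runtime-endpoint": "unix:///var/run/cri-dockerd.sock",
    		"hostname-override":          "multinode-859606",
    		"kubeconfig":                 "/etc/kubernetes/kubelet.conf",
    		"node-ip":                    "192.168.39.40",
    	}
    	keys := make([]string, 0, len(flags))
    	for k := range flags {
    		keys = append(keys, k)
    	}
    	sort.Strings(keys)
    	parts := []string{"/var/lib/minikube/binaries/v1.28.4/kubelet"}
    	for _, k := range keys {
    		parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
    	}
    	fmt.Println("ExecStart=" + strings.Join(parts, " "))
    }
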
I1212 00:36:44.688879 104530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
I1212 00:36:44.697808 104530 command_runner.go:130] > kubeadm
I1212 00:36:44.697826 104530 command_runner.go:130] > kubectl
I1212 00:36:44.697831 104530 command_runner.go:130] > kubelet
I1212 00:36:44.697894 104530 binaries.go:44] Found k8s binaries, skipping transfer
I1212 00:36:44.697957 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1212 00:36:44.705971 104530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
I1212 00:36:44.720935 104530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1212 00:36:44.735886 104530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
I1212 00:36:44.751846 104530 ssh_runner.go:195] Run: grep 192.168.39.40 control-plane.minikube.internal$ /etc/hosts
I1212 00:36:44.755479 104530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.40 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1212 00:36:44.767240 104530 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606 for IP: 192.168.39.40
I1212 00:36:44.767277 104530 certs.go:190] acquiring lock for shared ca certs: {Name:mk30ad7b34272eb8ac2c2d0da18d8d4f87fa28a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:36:44.767442 104530 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.key
I1212 00:36:44.767492 104530 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.key
I1212 00:36:44.767569 104530 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.key
I1212 00:36:44.767614 104530 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.key.7fcbe345
I1212 00:36:44.767658 104530 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.key
I1212 00:36:44.767671 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1212 00:36:44.767685 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1212 00:36:44.767697 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1212 00:36:44.767709 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1212 00:36:44.767723 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1212 00:36:44.767736 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1212 00:36:44.767748 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1212 00:36:44.767759 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1212 00:36:44.767806 104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609.pem (1338 bytes)
W1212 00:36:44.767833 104530 certs.go:433] ignoring /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609_empty.pem, impossibly tiny 0 bytes
I1212 00:36:44.767842 104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem (1679 bytes)
I1212 00:36:44.767866 104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem (1078 bytes)
I1212 00:36:44.767895 104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem (1123 bytes)
I1212 00:36:44.767941 104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem (1679 bytes)
I1212 00:36:44.767991 104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem (1708 bytes)
I1212 00:36:44.768017 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> /usr/share/ca-certificates/876092.pem
I1212 00:36:44.768033 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1212 00:36:44.768048 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609.pem -> /usr/share/ca-certificates/87609.pem
I1212 00:36:44.768657 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1212 00:36:44.791629 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1212 00:36:44.814579 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1212 00:36:44.837176 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1212 00:36:44.859769 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1212 00:36:44.882517 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1212 00:36:44.905279 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1212 00:36:44.927814 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1212 00:36:44.950936 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem --> /usr/share/ca-certificates/876092.pem (1708 bytes)
I1212 00:36:44.973314 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1212 00:36:44.995879 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609.pem --> /usr/share/ca-certificates/87609.pem (1338 bytes)
I1212 00:36:45.018814 104530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1212 00:36:45.034741 104530 ssh_runner.go:195] Run: openssl version
I1212 00:36:45.040084 104530 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I1212 00:36:45.040159 104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1212 00:36:45.049710 104530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1212 00:36:45.054223 104530 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
I1212 00:36:45.054253 104530 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
I1212 00:36:45.054292 104530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1212 00:36:45.059527 104530 command_runner.go:130] > b5213941
I1212 00:36:45.059696 104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1212 00:36:45.069012 104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/87609.pem && ln -fs /usr/share/ca-certificates/87609.pem /etc/ssl/certs/87609.pem"
I1212 00:36:45.078693 104530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/87609.pem
I1212 00:36:45.083070 104530 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:16 /usr/share/ca-certificates/87609.pem
I1212 00:36:45.083289 104530 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:16 /usr/share/ca-certificates/87609.pem
I1212 00:36:45.083354 104530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/87609.pem
I1212 00:36:45.089122 104530 command_runner.go:130] > 51391683
I1212 00:36:45.089194 104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/87609.pem /etc/ssl/certs/51391683.0"
I1212 00:36:45.099154 104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/876092.pem && ln -fs /usr/share/ca-certificates/876092.pem /etc/ssl/certs/876092.pem"
I1212 00:36:45.108823 104530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/876092.pem
I1212 00:36:45.113316 104530 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:16 /usr/share/ca-certificates/876092.pem
I1212 00:36:45.113568 104530 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:16 /usr/share/ca-certificates/876092.pem
I1212 00:36:45.113613 104530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/876092.pem
I1212 00:36:45.118966 104530 command_runner.go:130] > 3ec20f2e
I1212 00:36:45.119043 104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/876092.pem /etc/ssl/certs/3ec20f2e.0"
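
The openssl -hash / ln -fs sequence above registers each CA under its OpenSSL subject hash so the system trust store can find it; a sketch of the same dance driven from Go (paths from the log; needs root for /etc/ssl/certs):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCAByHash mirrors the logged openssl/ln steps: hash the cert's subject
    // and point /etc/ssl/certs/<hash>.0 at it.
    func linkCAByHash(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // mimic `ln -fs`
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		panic(err)
    	}
    }
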
I1212 00:36:45.128635 104530 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I1212 00:36:45.132978 104530 command_runner.go:130] > ca.crt
I1212 00:36:45.132994 104530 command_runner.go:130] > ca.key
I1212 00:36:45.133000 104530 command_runner.go:130] > healthcheck-client.crt
I1212 00:36:45.133004 104530 command_runner.go:130] > healthcheck-client.key
I1212 00:36:45.133008 104530 command_runner.go:130] > peer.crt
I1212 00:36:45.133014 104530 command_runner.go:130] > peer.key
I1212 00:36:45.133018 104530 command_runner.go:130] > server.crt
I1212 00:36:45.133022 104530 command_runner.go:130] > server.key
I1212 00:36:45.133062 104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1212 00:36:45.138700 104530 command_runner.go:130] > Certificate will not expire
I1212 00:36:45.138753 104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1212 00:36:45.143928 104530 command_runner.go:130] > Certificate will not expire
I1212 00:36:45.143989 104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1212 00:36:45.149974 104530 command_runner.go:130] > Certificate will not expire
I1212 00:36:45.150040 104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1212 00:36:45.155645 104530 command_runner.go:130] > Certificate will not expire
I1212 00:36:45.155702 104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1212 00:36:45.161120 104530 command_runner.go:130] > Certificate will not expire
I1212 00:36:45.161172 104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1212 00:36:45.166435 104530 command_runner.go:130] > Certificate will not expire
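
Each `openssl x509 -checkend 86400` call above asks whether a cert expires within the next 24 hours; the same check sketched in pure Go with crypto/x509:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin replicates `openssl x509 -checkend <seconds>`: does the
    // certificate expire within the given window?
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("will expire within 24h:", soon)
    }
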
I1212 00:36:45.166596 104530 kubeadm.go:404] StartCluster: {Name:multinode-859606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1212 00:36:45.166771 104530 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1212 00:36:45.186362 104530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1212 00:36:45.195450 104530 command_runner.go:130] > /var/lib/kubelet/config.yaml
I1212 00:36:45.195478 104530 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
I1212 00:36:45.195486 104530 command_runner.go:130] > /var/lib/minikube/etcd:
I1212 00:36:45.195492 104530 command_runner.go:130] > member
I1212 00:36:45.195591 104530 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I1212 00:36:45.195612 104530 kubeadm.go:636] restartCluster start
I1212 00:36:45.195674 104530 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1212 00:36:45.205557 104530 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1212 00:36:45.205994 104530 kubeconfig.go:135] verify returned: extract IP: "multinode-859606" does not appear in /home/jenkins/minikube-integration/17764-80294/kubeconfig
I1212 00:36:45.206105 104530 kubeconfig.go:146] "multinode-859606" context is missing from /home/jenkins/minikube-integration/17764-80294/kubeconfig - will repair!
I1212 00:36:45.206407 104530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-80294/kubeconfig: {Name:mkf7cdfdedbee22114abcb4b16af22e84438f3f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:36:45.206781 104530 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17764-80294/kubeconfig
I1212 00:36:45.207021 104530 kapi.go:59] client config for multinode-859606: &rest.Config{Host:"https://192.168.39.40:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.key", CAFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1212 00:36:45.207626 104530 cert_rotation.go:137] Starting client certificate rotation controller
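
The kubeconfig repair at kubeconfig.go:146 hinges on whether the profile's context exists in the file; a sketch of that presence check using k8s.io/client-go (an added dependency, not something this log itself uses directly):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	path := "/home/jenkins/minikube-integration/17764-80294/kubeconfig"
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		panic(err)
    	}
    	if _, ok := cfg.Contexts["multinode-859606"]; !ok {
    		// Mirrors the logged "context is missing ... will repair!" branch.
    		fmt.Println("context multinode-859606 missing; needs repair")
    	}
    }
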
I1212 00:36:45.207759 104530 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1212 00:36:45.216109 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:45.216158 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:45.227128 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:45.227145 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:45.227181 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:45.237721 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:45.738433 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:45.738513 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:45.749916 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:46.238556 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:46.238626 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:46.249796 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:46.738436 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:46.738510 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:46.750275 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:47.238820 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:47.238918 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:47.250330 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:47.737880 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:47.737967 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:47.749173 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:48.238871 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:48.238981 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:48.250477 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:48.737907 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:48.737986 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:48.749969 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:49.238635 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:49.238729 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:49.250296 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:49.738397 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:49.738483 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:49.750014 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:50.238638 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:50.238725 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:50.250537 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:50.738104 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:50.738212 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:50.749728 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:51.238279 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:51.238383 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:51.249977 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:51.738590 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:51.738674 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:51.750353 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:52.237967 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:52.238033 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:52.249749 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:52.738311 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:52.738400 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:52.749734 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:53.238473 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:53.238570 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:53.249803 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:53.738439 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:53.738545 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:53.749846 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:54.238458 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:54.238551 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:54.250276 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:54.738396 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:54.738477 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:54.749594 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:55.216372 104530 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
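
The block of pgrep probes above is a roughly 500 ms poll bounded by a deadline, ending in the "context deadline exceeded" verdict just above; a stdlib sketch of the same loop shape (the 10 s timeout is chosen for illustration):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()
    	for {
    		select {
    		case <-ctx.Done():
    			// Matches the logged outcome: "needs reconfigure: apiserver
    			// error: context deadline exceeded".
    			fmt.Println("apiserver never came up:", ctx.Err())
    			return
    		case <-tick.C:
    			// pgrep exits non-zero when no process matches.
    			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    				fmt.Println("apiserver pid found")
    				return
    			}
    		}
    	}
    }
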
I1212 00:36:55.216413 104530 kubeadm.go:1135] stopping kube-system containers ...
I1212 00:36:55.216471 104530 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1212 00:36:55.242800 104530 command_runner.go:130] > abde5ad85d4a
I1212 00:36:55.242825 104530 command_runner.go:130] > 6960e84b00b8
I1212 00:36:55.242831 104530 command_runner.go:130] > 55413175770e
I1212 00:36:55.242840 104530 command_runner.go:130] > 56fd6254d6e1
I1212 00:36:55.242847 104530 command_runner.go:130] > b63a75f45416
I1212 00:36:55.242852 104530 command_runner.go:130] > 19421dc21753
I1212 00:36:55.242858 104530 command_runner.go:130] > ecfcbd586321
I1212 00:36:55.242864 104530 command_runner.go:130] > 9767a413586e
I1212 00:36:55.242869 104530 command_runner.go:130] > 4ba778c674f0
I1212 00:36:55.242874 104530 command_runner.go:130] > 19f9d76e8f1c
I1212 00:36:55.242880 104530 command_runner.go:130] > fc27b8583502
I1212 00:36:55.242885 104530 command_runner.go:130] > a49117d4a4c8
I1212 00:36:55.242891 104530 command_runner.go:130] > 5aa25d818283
I1212 00:36:55.242897 104530 command_runner.go:130] > ed0cff49857f
I1212 00:36:55.242904 104530 command_runner.go:130] > 510b18b7b6d6
I1212 00:36:55.242914 104530 command_runner.go:130] > 34ac7e63ee51
I1212 00:36:55.242922 104530 command_runner.go:130] > dc5d8378ca26
I1212 00:36:55.242929 104530 command_runner.go:130] > 335bd2869121
I1212 00:36:55.242939 104530 command_runner.go:130] > 10ca85c531dc
I1212 00:36:55.242951 104530 command_runner.go:130] > dcead5249b2f
I1212 00:36:55.242961 104530 command_runner.go:130] > c3360b039380
I1212 00:36:55.242971 104530 command_runner.go:130] > 08edfeaa5cab
I1212 00:36:55.242979 104530 command_runner.go:130] > 5c674269e2eb
I1212 00:36:55.242986 104530 command_runner.go:130] > e80fc43dacae
I1212 00:36:55.242994 104530 command_runner.go:130] > 547ce8660107
I1212 00:36:55.243001 104530 command_runner.go:130] > 6fce6e649e1a
I1212 00:36:55.243008 104530 command_runner.go:130] > 7db8deb95763
I1212 00:36:55.243015 104530 command_runner.go:130] > fef547bfcef9
I1212 00:36:55.243026 104530 command_runner.go:130] > afcf416fd476
I1212 00:36:55.243035 104530 command_runner.go:130] > d42aca9dd643
I1212 00:36:55.243041 104530 command_runner.go:130] > 757215f5e48f
I1212 00:36:55.243048 104530 command_runner.go:130] > f785241ab5c9
I1212 00:36:55.243103 104530 docker.go:469] Stopping containers: [abde5ad85d4a 6960e84b00b8 55413175770e 56fd6254d6e1 b63a75f45416 19421dc21753 ecfcbd586321 9767a413586e 4ba778c674f0 19f9d76e8f1c fc27b8583502 a49117d4a4c8 5aa25d818283 ed0cff49857f 510b18b7b6d6 34ac7e63ee51 dc5d8378ca26 335bd2869121 10ca85c531dc dcead5249b2f c3360b039380 08edfeaa5cab 5c674269e2eb e80fc43dacae 547ce8660107 6fce6e649e1a 7db8deb95763 fef547bfcef9 afcf416fd476 d42aca9dd643 757215f5e48f f785241ab5c9]
I1212 00:36:55.243180 104530 ssh_runner.go:195] Run: docker stop abde5ad85d4a 6960e84b00b8 55413175770e 56fd6254d6e1 b63a75f45416 19421dc21753 ecfcbd586321 9767a413586e 4ba778c674f0 19f9d76e8f1c fc27b8583502 a49117d4a4c8 5aa25d818283 ed0cff49857f 510b18b7b6d6 34ac7e63ee51 dc5d8378ca26 335bd2869121 10ca85c531dc dcead5249b2f c3360b039380 08edfeaa5cab 5c674269e2eb e80fc43dacae 547ce8660107 6fce6e649e1a 7db8deb95763 fef547bfcef9 afcf416fd476 d42aca9dd643 757215f5e48f f785241ab5c9
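
Stopping the kube-system containers is a two-step docker ps / docker stop, as the surrounding lines show; a sketch of that sequence from Go using the same name filter:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same filter the log shows for kube-system pod containers.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return
    	}
    	fmt.Println("stopping:", ids)
    	args := append([]string{"stop"}, ids...)
    	if err := exec.Command("docker", args...).Run(); err != nil {
    		panic(err)
    	}
    }
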
I1212 00:36:55.267560 104530 command_runner.go:130] > abde5ad85d4a
I1212 00:36:55.267589 104530 command_runner.go:130] > 6960e84b00b8
I1212 00:36:55.267595 104530 command_runner.go:130] > 55413175770e
I1212 00:36:55.267601 104530 command_runner.go:130] > 56fd6254d6e1
I1212 00:36:55.267608 104530 command_runner.go:130] > b63a75f45416
I1212 00:36:55.267613 104530 command_runner.go:130] > 19421dc21753
I1212 00:36:55.267630 104530 command_runner.go:130] > ecfcbd586321
I1212 00:36:55.267637 104530 command_runner.go:130] > 9767a413586e
I1212 00:36:55.267643 104530 command_runner.go:130] > 4ba778c674f0
I1212 00:36:55.267650 104530 command_runner.go:130] > 19f9d76e8f1c
I1212 00:36:55.267656 104530 command_runner.go:130] > fc27b8583502
I1212 00:36:55.267666 104530 command_runner.go:130] > a49117d4a4c8
I1212 00:36:55.267672 104530 command_runner.go:130] > 5aa25d818283
I1212 00:36:55.267679 104530 command_runner.go:130] > ed0cff49857f
I1212 00:36:55.267707 104530 command_runner.go:130] > 510b18b7b6d6
I1212 00:36:55.267723 104530 command_runner.go:130] > 34ac7e63ee51
I1212 00:36:55.267729 104530 command_runner.go:130] > dc5d8378ca26
I1212 00:36:55.267735 104530 command_runner.go:130] > 335bd2869121
I1212 00:36:55.267742 104530 command_runner.go:130] > 10ca85c531dc
I1212 00:36:55.267757 104530 command_runner.go:130] > dcead5249b2f
I1212 00:36:55.267764 104530 command_runner.go:130] > c3360b039380
I1212 00:36:55.267770 104530 command_runner.go:130] > 08edfeaa5cab
I1212 00:36:55.267779 104530 command_runner.go:130] > 5c674269e2eb
I1212 00:36:55.267785 104530 command_runner.go:130] > e80fc43dacae
I1212 00:36:55.267798 104530 command_runner.go:130] > 547ce8660107
I1212 00:36:55.267807 104530 command_runner.go:130] > 6fce6e649e1a
I1212 00:36:55.267816 104530 command_runner.go:130] > 7db8deb95763
I1212 00:36:55.267825 104530 command_runner.go:130] > fef547bfcef9
I1212 00:36:55.267834 104530 command_runner.go:130] > afcf416fd476
I1212 00:36:55.267843 104530 command_runner.go:130] > d42aca9dd643
I1212 00:36:55.267852 104530 command_runner.go:130] > 757215f5e48f
I1212 00:36:55.267861 104530 command_runner.go:130] > f785241ab5c9
I1212 00:36:55.268959 104530 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1212 00:36:55.283176 104530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1212 00:36:55.291931 104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I1212 00:36:55.291964 104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I1212 00:36:55.291973 104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I1212 00:36:55.291980 104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1212 00:36:55.292025 104530 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1212 00:36:55.292077 104530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1212 00:36:55.300972 104530 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1212 00:36:55.300994 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1212 00:36:55.409847 104530 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1212 00:36:55.410210 104530 command_runner.go:130] > [certs] Using existing ca certificate authority
I1212 00:36:55.410700 104530 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I1212 00:36:55.411130 104530 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1212 00:36:55.411654 104530 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
I1212 00:36:55.412107 104530 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
I1212 00:36:55.413059 104530 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
I1212 00:36:55.413464 104530 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
I1212 00:36:55.413846 104530 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
I1212 00:36:55.414303 104530 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1212 00:36:55.414667 104530 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
I1212 00:36:55.416560 104530 command_runner.go:130] > [certs] Using the existing "sa" key
I1212 00:36:55.416642 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1212 00:36:56.211128 104530 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1212 00:36:56.211154 104530 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I1212 00:36:56.211167 104530 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1212 00:36:56.211176 104530 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1212 00:36:56.211190 104530 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1212 00:36:56.211225 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1212 00:36:56.277692 104530 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1212 00:36:56.278847 104530 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1212 00:36:56.278889 104530 command_runner.go:130] > [kubelet-start] Starting the kubelet
I1212 00:36:56.393138 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1212 00:36:56.490674 104530 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1212 00:36:56.490707 104530 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I1212 00:36:56.495141 104530 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1212 00:36:56.496969 104530 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I1212 00:36:56.505734 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I1212 00:36:56.568063 104530 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
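The run from 00:36:55.3 to 00:36:56.6 is a targeted reconfigure rather than a full `kubeadm init`: each `init phase` reuses whatever survived the restart (every cert is reported as existing) and only rewrites what the wiped /etc/kubernetes needs. A hedged sketch of that loop, with the exact command strings from the log and plain exec.Command standing in for minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Order matters: certs before kubeconfigs, kubelet before the
	// static-pod manifests it will launch.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, ph := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, ph)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("%s\n%s", cmd, out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}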
I1212 00:36:56.574809 104530 api_server.go:52] waiting for apiserver process to appear ...
I1212 00:36:56.574879 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:56.587806 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:57.100023 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:57.600145 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:58.099727 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:58.599716 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:59.099714 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:59.599934 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:37:00.099594 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:37:00.117319 104530 command_runner.go:130] > 1800
I1212 00:37:00.117686 104530 api_server.go:72] duration metric: took 3.542880083s to wait for apiserver process to appear ...
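The nine pgrep runs above are a simple poll: one immediate retry, then roughly every 500ms until the process exists (the bare "1800" is the apiserver's PID). A sketch of that wait, assuming the same pattern string; the timeout value here is illustrative, not minikube's:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServer polls pgrep until kube-apiserver shows up, the way
// the log does between 00:36:56.5 and 00:37:00.1.
func waitForAPIServer(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // "1800" above
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("kube-apiserver never appeared")
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForAPIServer(90 * time.Second))
}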
I1212 00:37:00.117709 104530 api_server.go:88] waiting for apiserver healthz status ...
I1212 00:37:00.117727 104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
I1212 00:37:02.771626 104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1212 00:37:02.771661 104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1212 00:37:02.771677 104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
I1212 00:37:02.838010 104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1212 00:37:02.838048 104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1212 00:37:03.338843 104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
I1212 00:37:03.344825 104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W1212 00:37:03.344863 104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I1212 00:37:03.838231 104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
I1212 00:37:03.845511 104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W1212 00:37:03.845548 104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I1212 00:37:04.339177 104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
I1212 00:37:04.344349 104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 200:
ok
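The healthz progression above is the normal restart shape: first 403, because anonymous access to /healthz is only granted once the RBAC bootstrap roles exist; then 500 with [-]poststarthook/rbac/bootstrap-roles (and briefly the priority-class hook) still failing; then a plain 200 "ok". A minimal probe in the same spirit, assuming the endpoint from the log and skipping TLS verification since this sketch does not carry the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify is for the sketch only; minikube trusts the
		// cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.39.40:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d %s\n", resp.StatusCode, body) // 403 -> 500 -> 200 ok
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}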
I1212 00:37:04.344445 104530 round_trippers.go:463] GET https://192.168.39.40:8443/version
I1212 00:37:04.344456 104530 round_trippers.go:469] Request Headers:
I1212 00:37:04.344469 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:04.344482 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:04.352515 104530 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I1212 00:37:04.352546 104530 round_trippers.go:577] Response Headers:
I1212 00:37:04.352557 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:04.352567 104530 round_trippers.go:580] Content-Length: 264
I1212 00:37:04.352575 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:04 GMT
I1212 00:37:04.352584 104530 round_trippers.go:580] Audit-Id: 63ee9643-66fd-4e1a-a212-0e71234e47a2
I1212 00:37:04.352591 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:04.352598 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:04.352608 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:04.352649 104530 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.4",
"gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
"gitTreeState": "clean",
"buildDate": "2023-11-15T16:48:54Z",
"goVersion": "go1.20.11",
"compiler": "gc",
"platform": "linux/amd64"
}
I1212 00:37:04.352786 104530 api_server.go:141] control plane version: v1.28.4
I1212 00:37:04.352817 104530 api_server.go:131] duration metric: took 4.235100574s to wait for apiserver health ...
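The GET on /version above returns apimachinery's version.Info; gitVersion is the only field read here, becoming "control plane version: v1.28.4". Decoding just that field, as a sketch with a trimmed payload:

package main

import (
	"encoding/json"
	"fmt"
)

// Only the field the log actually uses; the real payload has more.
type versionInfo struct {
	GitVersion string `json:"gitVersion"`
}

func main() {
	payload := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4"}`)
	var v versionInfo
	if err := json.Unmarshal(payload, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}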
I1212 00:37:04.352829 104530 cni.go:84] Creating CNI manager for ""
I1212 00:37:04.352840 104530 cni.go:136] 2 nodes found, recommending kindnet
I1212 00:37:04.355105 104530 out.go:177] * Configuring CNI (Container Networking Interface) ...
I1212 00:37:04.356881 104530 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1212 00:37:04.363840 104530 command_runner.go:130] > File: /opt/cni/bin/portmap
I1212 00:37:04.363876 104530 command_runner.go:130] > Size: 2615256 Blocks: 5112 IO Block: 4096 regular file
I1212 00:37:04.363888 104530 command_runner.go:130] > Device: 11h/17d Inode: 3544 Links: 1
I1212 00:37:04.363897 104530 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I1212 00:37:04.363932 104530 command_runner.go:130] > Access: 2023-12-12 00:36:32.475977075 +0000
I1212 00:37:04.363942 104530 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
I1212 00:37:04.363949 104530 command_runner.go:130] > Change: 2023-12-12 00:36:30.674977075 +0000
I1212 00:37:04.363955 104530 command_runner.go:130] > Birth: -
I1212 00:37:04.364014 104530 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
I1212 00:37:04.364031 104530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I1212 00:37:04.384536 104530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1212 00:37:05.836837 104530 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I1212 00:37:05.848426 104530 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I1212 00:37:05.852488 104530 command_runner.go:130] > serviceaccount/kindnet unchanged
I1212 00:37:05.879402 104530 command_runner.go:130] > daemonset.apps/kindnet configured
I1212 00:37:05.888362 104530 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.503791012s)
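The kindnet manifest is copied into the VM (2438 bytes, above) and applied with the version-pinned kubectl; "unchanged" on the RBAC objects and "configured" on the DaemonSet show why `kubectl apply` is safe to repeat across restarts. The same invocation as a sketch, binary and kubeconfig paths taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	fmt.Printf("%s", out) // "... unchanged" / "daemonset.apps/kindnet configured"
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}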
I1212 00:37:05.888392 104530 system_pods.go:43] waiting for kube-system pods to appear ...
I1212 00:37:05.888502 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:05.888513 104530 round_trippers.go:469] Request Headers:
I1212 00:37:05.888524 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:05.888534 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:05.893619 104530 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I1212 00:37:05.893657 104530 round_trippers.go:577] Response Headers:
I1212 00:37:05.893666 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:05.893674 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:05.893682 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:05.893690 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:05.893699 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:05 GMT
I1212 00:37:05.893708 104530 round_trippers.go:580] Audit-Id: 0f783734-4de0-49f4-945d-a630ecccf305
I1212 00:37:05.895980 104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1199"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
I1212 00:37:05.900061 104530 system_pods.go:59] 12 kube-system pods found
I1212 00:37:05.900092 104530 system_pods.go:61] "coredns-5dd5756b68-t9jz8" [3605a003-e8d6-46b2-8fe7-f45647656622] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1212 00:37:05.900101 104530 system_pods.go:61] "etcd-multinode-859606" [7d6ae370-b910-4aef-8729-e141b307ae17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1212 00:37:05.900106 104530 system_pods.go:61] "kindnet-9slwc" [6b37daf7-e9d5-47c5-ae94-01150282b6cf] Running
I1212 00:37:05.900109 104530 system_pods.go:61] "kindnet-d4q52" [35ed1c56-7487-4b6d-ab1f-b5cfe6502739] Running
I1212 00:37:05.900116 104530 system_pods.go:61] "kindnet-x2g5d" [c1dab004-2557-4b4f-975b-bd0b5a8f4d90] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I1212 00:37:05.900123 104530 system_pods.go:61] "kube-apiserver-multinode-859606" [0060efa7-dc06-439e-878f-b93b0e016326] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1212 00:37:05.900135 104530 system_pods.go:61] "kube-controller-manager-multinode-859606" [901bf3ab-f34d-42c8-b1da-d5431ae0219f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1212 00:37:05.900155 104530 system_pods.go:61] "kube-proxy-6f6zz" [d5931621-47fd-4f1a-bf46-813dd8352f00] Running
I1212 00:37:05.900164 104530 system_pods.go:61] "kube-proxy-prf7f" [8238226c-3d01-4b91-963b-7360206b8615] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1212 00:37:05.900171 104530 system_pods.go:61] "kube-proxy-q9h26" [7dd12033-bf81-4cd3-a412-3fe3211dc87b] Running
I1212 00:37:05.900176 104530 system_pods.go:61] "kube-scheduler-multinode-859606" [19a4264c-6ba5-44f4-8419-6f04d6224c92] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1212 00:37:05.900188 104530 system_pods.go:61] "storage-provisioner" [a021db21-b335-4c05-8e32-808642dbb72e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1212 00:37:05.900194 104530 system_pods.go:74] duration metric: took 11.796772ms to wait for pod list to return data ...
I1212 00:37:05.900203 104530 node_conditions.go:102] verifying NodePressure condition ...
I1212 00:37:05.900268 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes
I1212 00:37:05.900277 104530 round_trippers.go:469] Request Headers:
I1212 00:37:05.900284 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:05.900293 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:05.902944 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:05.902977 104530 round_trippers.go:577] Response Headers:
I1212 00:37:05.902987 104530 round_trippers.go:580] Audit-Id: 81b09a2b-85f5-497e-b79a-4f9569b9a2e7
I1212 00:37:05.903000 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:05.903011 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:05.903018 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:05.903031 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:05.903044 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:05 GMT
I1212 00:37:05.903213 104530 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1199"},"items":[{"metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10135 chars]
I1212 00:37:05.903891 104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1212 00:37:05.903937 104530 node_conditions.go:123] node cpu capacity is 2
I1212 00:37:05.903961 104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1212 00:37:05.903967 104530 node_conditions.go:123] node cpu capacity is 2
I1212 00:37:05.903974 104530 node_conditions.go:105] duration metric: took 3.766372ms to run NodePressure ...
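The NodePressure pass reads each node's capacity, which is why the storage/cpu pair is printed twice for this two-node cluster. minikube issues the GET through its own round tripper; an equivalent client-go sketch, with the in-VM kubeconfig path from the log as an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The two values logged per node above.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}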
I1212 00:37:05.903993 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1212 00:37:06.226936 104530 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I1212 00:37:06.226983 104530 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I1212 00:37:06.227046 104530 kubeadm.go:772] waiting for restarted kubelet to initialise ...
I1212 00:37:06.227181 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
I1212 00:37:06.227195 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.227207 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.227216 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.231116 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:06.231139 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.231148 104530 round_trippers.go:580] Audit-Id: 69442a0f-0400-4b49-b627-328626316be1
I1212 00:37:06.231157 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.231166 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.231175 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.231194 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.231203 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.231655 104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1202"},"items":[{"metadata":{"name":"etcd-multinode-859606","namespace":"kube-system","uid":"7d6ae370-b910-4aef-8729-e141b307ae17","resourceVersion":"1175","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.40:2379","kubernetes.io/config.hash":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.mirror":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.seen":"2023-12-12T00:30:03.645880014Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29766 chars]
I1212 00:37:06.233034 104530 kubeadm.go:787] kubelet initialised
I1212 00:37:06.233057 104530 kubeadm.go:788] duration metric: took 5.989168ms waiting for restarted kubelet to initialise ...
I1212 00:37:06.233070 104530 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
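From here the log waits per pod: list everything in kube-system, then poll each pod matching the labels above for a Ready condition, with a 4m0s budget per pod. The condition being polled for, sketched with client-go types (the helper name is ours, not minikube's):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady is the condition each 4m0s wait below is polling for.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A pod whose containers are not yet ready, as in the list above.
	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println("ready:", isPodReady(p)) // false: keep polling
}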
I1212 00:37:06.233145 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:06.233158 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.233168 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.233176 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.237466 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:06.237487 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.237497 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.237506 104530 round_trippers.go:580] Audit-Id: 39c8852d-e60c-4370-870d-ec951e0b6883
I1212 00:37:06.237515 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.237528 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.237540 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.237548 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.238857 104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1202"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
I1212 00:37:06.242660 104530 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
I1212 00:37:06.242743 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:06.242753 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.242767 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.242780 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.245902 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:06.245916 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.245922 104530 round_trippers.go:580] Audit-Id: 992a9c9e-aaec-49ae-b76c-09a84a7382e6
I1212 00:37:06.245937 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.245952 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.245967 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.245974 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.245983 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.246223 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:06.246613 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:06.246627 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.246633 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.246640 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.248752 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:06.248771 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.248780 104530 round_trippers.go:580] Audit-Id: e035e5e3-4a98-439c-b13b-fca81955f3e3
I1212 00:37:06.248788 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.248796 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.248805 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.248820 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.248828 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.249002 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:06.249315 104530 pod_ready.go:97] node "multinode-859606" hosting pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.249335 104530 pod_ready.go:81] duration metric: took 6.646085ms waiting for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
E1212 00:37:06.249343 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
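This skip pattern repeats for every control-plane pod below: before polling a pod, the waiter fetches its node, and a node still reporting Ready=False short-circuits the wait instead of consuming the 4m0s budget. The node-side gate, sketched:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeIsReady is the check behind "has status \"Ready\":\"False\"" above.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// multinode-859606 mid-restart: Ready reported False.
	n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println("wait on this node's pods?", nodeIsReady(n)) // false -> skip
}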
I1212 00:37:06.249367 104530 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:06.249423 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-859606
I1212 00:37:06.249431 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.249441 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.249459 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.251411 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:06.251431 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.251445 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.251453 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.251462 104530 round_trippers.go:580] Audit-Id: 78646abe-5066-4ba6-8d95-ec6fa44a1ab7
I1212 00:37:06.251469 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.251476 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.251486 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.251707 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-859606","namespace":"kube-system","uid":"7d6ae370-b910-4aef-8729-e141b307ae17","resourceVersion":"1175","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.40:2379","kubernetes.io/config.hash":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.mirror":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.seen":"2023-12-12T00:30:03.645880014Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6296 chars]
I1212 00:37:06.252098 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:06.252112 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.252121 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.252127 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.254083 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:06.254103 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.254111 104530 round_trippers.go:580] Audit-Id: 55b0d2ca-975d-4309-84a7-7cb9b1d8e361
I1212 00:37:06.254120 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.254128 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.254136 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.254144 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.254152 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.254323 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:06.254602 104530 pod_ready.go:97] node "multinode-859606" hosting pod "etcd-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.254619 104530 pod_ready.go:81] duration metric: took 5.239063ms waiting for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
E1212 00:37:06.254626 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "etcd-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.254639 104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:06.254698 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-859606
I1212 00:37:06.254708 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.254715 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.254727 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.256930 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:06.256949 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.256958 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.256967 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.256974 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.256983 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.256991 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.257005 104530 round_trippers.go:580] Audit-Id: aa63f562-c9c3-453f-92e9-d6a4c4b3232f
I1212 00:37:06.257170 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-859606","namespace":"kube-system","uid":"0060efa7-dc06-439e-878f-b93b0e016326","resourceVersion":"1177","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.40:8443","kubernetes.io/config.hash":"6579d881f0553848179768317ac84853","kubernetes.io/config.mirror":"6579d881f0553848179768317ac84853","kubernetes.io/config.seen":"2023-12-12T00:29:55.207817853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1212 00:37:06.257538 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:06.257552 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.257558 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.257564 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.259425 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:06.259445 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.259455 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.259463 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.259471 104530 round_trippers.go:580] Audit-Id: 6b47a0d5-4136-488c-882b-b7fdd50344ce
I1212 00:37:06.259479 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.259487 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.259495 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.259782 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:06.260081 104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-apiserver-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.260097 104530 pod_ready.go:81] duration metric: took 5.449955ms waiting for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
E1212 00:37:06.260103 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-apiserver-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.260113 104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:06.260178 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:06.260188 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.260196 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.260209 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.262963 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:06.262979 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.262988 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.262996 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.263012 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.263024 104530 round_trippers.go:580] Audit-Id: eb54b9e3-39c5-4e0b-975b-d574f9443f33
I1212 00:37:06.263034 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.263051 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.263697 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:06.289336 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:06.289371 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.289380 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.289385 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.292233 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:06.292251 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.292257 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.292263 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.292268 104530 round_trippers.go:580] Audit-Id: 436076e3-8b39-45e2-80a6-f8f174ee0ea6
I1212 00:37:06.292273 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.292280 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.292288 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.292641 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:06.293036 104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-controller-manager-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.293058 104530 pod_ready.go:81] duration metric: took 32.933264ms waiting for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
E1212 00:37:06.293071 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-controller-manager-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.293082 104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
I1212 00:37:06.489501 104530 request.go:629] Waited for 196.342403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6f6zz
I1212 00:37:06.489581 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6f6zz
I1212 00:37:06.489586 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.489598 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.489608 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.493034 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:06.493071 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.493081 104530 round_trippers.go:580] Audit-Id: 0957bc6a-2f51-41b9-a929-11d0c801edd6
I1212 00:37:06.493089 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.493098 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.493113 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.493126 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.493134 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.493829 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6f6zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d5931621-47fd-4f1a-bf46-813dd8352f00","resourceVersion":"1087","creationTimestamp":"2023-12-12T00:32:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:32:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
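The "Waited for 196.342403ms due to client-side throttling" lines here and below come from client-go's default token-bucket limiter (QPS 5, burst 10) pacing this burst of GETs; as the message says, it is not server-side priority-and-fairness. Where a client controls its own rest.Config, the limits are tunable; a sketch with illustrative values, not minikube's:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Raising these trades API-server load for fewer throttle waits.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client configured:", cs != nil)
}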
I1212 00:37:06.688623 104530 request.go:629] Waited for 194.307311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m03
I1212 00:37:06.688686 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m03
I1212 00:37:06.688690 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.688698 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.688704 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.691344 104530 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1212 00:37:06.691361 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.691368 104530 round_trippers.go:580] Audit-Id: 5d88fdfd-6f2f-44b1-a736-b6120a7e5a78
I1212 00:37:06.691373 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.691390 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.691397 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.691405 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.691413 104530 round_trippers.go:580] Content-Length: 210
I1212 00:37:06.691425 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.691448 104530 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-859606-m03\" not found","reason":"NotFound","details":{"name":"multinode-859606-m03","kind":"nodes"},"code":404}
I1212 00:37:06.691655 104530 pod_ready.go:97] node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
I1212 00:37:06.691677 104530 pod_ready.go:81] duration metric: took 398.587524ms waiting for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
E1212 00:37:06.691686 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
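[editor's note] The ~195 ms pauses that request.go:629 keeps reporting above ("client-side throttling, not priority and fairness") come from client-go's token-bucket limiter: with rest.Config left at its defaults (QPS 5, Burst 10), every request past the burst waits about 1s/5 = 200 ms. A minimal sketch, assuming only the public k8s.io/client-go/util/flowcontrol API (this is not minikube's code):

    package main

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // Same defaults rest.Config falls back to: 5 requests/s, burst of 10.
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
        for i := 0; i < 12; i++ {
            start := time.Now()
            if err := limiter.Wait(context.Background()); err != nil {
                panic(err)
            }
            // After the first 10 (burst) calls, each wait settles near 200ms,
            // matching the ~196ms delays logged by request.go:629 above.
            fmt.Printf("call %2d waited %v\n", i, time.Since(start).Round(time.Millisecond))
        }
    }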
I1212 00:37:06.691693 104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
I1212 00:37:06.889174 104530 request.go:629] Waited for 197.369164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-prf7f
I1212 00:37:06.889252 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-prf7f
I1212 00:37:06.889259 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.889271 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.889280 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.893029 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:06.893047 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.893054 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.893093 104530 round_trippers.go:580] Audit-Id: 6846aa1b-42ae-4d5d-a1c7-384d5728840b
I1212 00:37:06.893108 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.893115 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.893120 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.893128 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.893282 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-prf7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"8238226c-3d01-4b91-963b-7360206b8615","resourceVersion":"1182","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5929 chars]
I1212 00:37:07.089197 104530 request.go:629] Waited for 195.360283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:07.089292 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:07.089298 104530 round_trippers.go:469] Request Headers:
I1212 00:37:07.089316 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:07.089322 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:07.091891 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:07.091927 104530 round_trippers.go:577] Response Headers:
I1212 00:37:07.091939 104530 round_trippers.go:580] Audit-Id: 1d65f568-2c4a-42d4-bbba-8be4bdc48dd6
I1212 00:37:07.091948 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:07.091961 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:07.091970 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:07.091979 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:07.091990 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:07 GMT
I1212 00:37:07.092224 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:07.092619 104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-proxy-prf7f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:07.092640 104530 pod_ready.go:81] duration metric: took 400.940457ms waiting for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
E1212 00:37:07.092649 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-proxy-prf7f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:07.092655 104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
I1212 00:37:07.289085 104530 request.go:629] Waited for 196.361677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
I1212 00:37:07.289150 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
I1212 00:37:07.289155 104530 round_trippers.go:469] Request Headers:
I1212 00:37:07.289165 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:07.289173 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:07.292103 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:07.292128 104530 round_trippers.go:577] Response Headers:
I1212 00:37:07.292139 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:07.292147 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:07 GMT
I1212 00:37:07.292160 104530 round_trippers.go:580] Audit-Id: 4abc3eb7-8c82-4d87-b6ea-4f96f5e08936
I1212 00:37:07.292172 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:07.292182 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:07.292187 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:07.292410 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q9h26","generateName":"kube-proxy-","namespace":"kube-system","uid":"7dd12033-bf81-4cd3-a412-3fe3211dc87b","resourceVersion":"978","creationTimestamp":"2023-12-12T00:31:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:31:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
I1212 00:37:07.489267 104530 request.go:629] Waited for 196.338554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
I1212 00:37:07.489349 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
I1212 00:37:07.489362 104530 round_trippers.go:469] Request Headers:
I1212 00:37:07.489373 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:07.489380 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:07.491859 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:07.491887 104530 round_trippers.go:577] Response Headers:
I1212 00:37:07.491897 104530 round_trippers.go:580] Audit-Id: a3f5d27d-a101-460d-9f23-04a20e185c6f
I1212 00:37:07.491907 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:07.491930 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:07.491943 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:07.491952 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:07.491959 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:07 GMT
I1212 00:37:07.492124 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606-m02","uid":"4dead465-c032-4274-8147-a5a7d38c1bf5","resourceVersion":"1083","creationTimestamp":"2023-12-12T00:34:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_35_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:34:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3805 chars]
I1212 00:37:07.492453 104530 pod_ready.go:92] pod "kube-proxy-q9h26" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:07.492469 104530 pod_ready.go:81] duration metric: took 399.80822ms waiting for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
I1212 00:37:07.492483 104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:07.688932 104530 request.go:629] Waited for 196.377404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
I1212 00:37:07.689024 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
I1212 00:37:07.689047 104530 round_trippers.go:469] Request Headers:
I1212 00:37:07.689062 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:07.689086 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:07.692055 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:07.692076 104530 round_trippers.go:577] Response Headers:
I1212 00:37:07.692083 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:07.692088 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:07.692094 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:07.692101 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:07.692109 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:07 GMT
I1212 00:37:07.692118 104530 round_trippers.go:580] Audit-Id: 8c31c43b-819b-4283-9d9f-35f04a7e36e9
I1212 00:37:07.692273 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-859606","namespace":"kube-system","uid":"19a4264c-6ba5-44f4-8419-6f04d6224c92","resourceVersion":"1173","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.mirror":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.seen":"2023-12-12T00:29:55.207819594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
I1212 00:37:07.889054 104530 request.go:629] Waited for 196.353748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:07.889117 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:07.889125 104530 round_trippers.go:469] Request Headers:
I1212 00:37:07.889137 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:07.889151 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:07.892167 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:07.892188 104530 round_trippers.go:577] Response Headers:
I1212 00:37:07.892194 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:07.892200 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:07.892226 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:07.892241 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:07.892250 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:07 GMT
I1212 00:37:07.892257 104530 round_trippers.go:580] Audit-Id: 9ee0618c-b043-4e2b-9e76-9d15b5ac7dc7
I1212 00:37:07.892403 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:07.892746 104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-scheduler-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:07.892773 104530 pod_ready.go:81] duration metric: took 400.280036ms waiting for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
E1212 00:37:07.892785 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-scheduler-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:07.892824 104530 pod_ready.go:38] duration metric: took 1.659742815s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
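[editor's note] Each pod_ready.go check above boils down to reading the PodReady condition from the pod's status (and, as the kube-proxy-6f6zz case shows, bailing out when the hosting node is gone or not Ready). A hedged client-go sketch of that core check; podIsReady is an illustrative name, not minikube's:

    package readiness

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podIsReady fetches a pod and reports whether its PodReady condition is
    // True -- the signal the pod_ready.go lines above are polling for.
    func podIsReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }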
I1212 00:37:07.892857 104530 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1212 00:37:07.904430 104530 command_runner.go:130] > -16
I1212 00:37:07.904886 104530 ops.go:34] apiserver oom_adj: -16
I1212 00:37:07.904899 104530 kubeadm.go:640] restartCluster took 22.709280238s
I1212 00:37:07.904906 104530 kubeadm.go:406] StartCluster complete in 22.738318179s
I1212 00:37:07.904921 104530 settings.go:142] acquiring lock: {Name:mk78e6f78084358f8434def169cefe6a62407a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:37:07.904985 104530 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17764-80294/kubeconfig
I1212 00:37:07.905654 104530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-80294/kubeconfig: {Name:mkf7cdfdedbee22114abcb4b16af22e84438f3f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:37:07.905860 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1212 00:37:07.906001 104530 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
I1212 00:37:07.906240 104530 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17764-80294/kubeconfig
I1212 00:37:07.906246 104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:37:07.909257 104530 out.go:177] * Enabled addons:
I1212 00:37:07.910860 104530 addons.go:502] enable addons completed in 4.865147ms: enabled=[]
I1212 00:37:07.911128 104530 kapi.go:59] client config for multinode-859606: &rest.Config{Host:"https://192.168.39.40:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.key", CAFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1212 00:37:07.911447 104530 round_trippers.go:463] GET https://192.168.39.40:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I1212 00:37:07.911463 104530 round_trippers.go:469] Request Headers:
I1212 00:37:07.911471 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:07.911477 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:07.914264 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:07.914281 104530 round_trippers.go:577] Response Headers:
I1212 00:37:07.914291 104530 round_trippers.go:580] Audit-Id: 48f5a121-1933-4a22-a355-5496f01879d3
I1212 00:37:07.914299 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:07.914306 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:07.914317 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:07.914324 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:07.914335 104530 round_trippers.go:580] Content-Length: 292
I1212 00:37:07.914346 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:07 GMT
I1212 00:37:07.914379 104530 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"75766566-fdf3-4c8a-abaa-ce458e02b129","resourceVersion":"1201","creationTimestamp":"2023-12-12T00:30:03Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I1212 00:37:07.914516 104530 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-859606" context rescaled to 1 replicas
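[editor's note] The GET against .../deployments/coredns/scale followed by kapi.go:248's "rescaled to 1 replicas" is the standard scale-subresource round trip. A minimal client-go sketch under the same assumptions (rescaleCoreDNS is an illustrative name, not minikube's kapi code):

    package rescale

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS pins kube-system/coredns to one replica through the
    // scale subresource, mirroring the GET /scale seen above.
    func rescaleCoreDNS(ctx context.Context, c kubernetes.Interface) error {
        scale, err := c.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == 1 {
            return nil // already at the desired replica count
        }
        scale.Spec.Replicas = 1
        _, err = c.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }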
I1212 00:37:07.914548 104530 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
I1212 00:37:07.917208 104530 out.go:177] * Verifying Kubernetes components...
I1212 00:37:07.918721 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1212 00:37:08.110540 104530 command_runner.go:130] > apiVersion: v1
I1212 00:37:08.110578 104530 command_runner.go:130] > data:
I1212 00:37:08.110585 104530 command_runner.go:130] >   Corefile: |
I1212 00:37:08.110591 104530 command_runner.go:130] >     .:53 {
I1212 00:37:08.110596 104530 command_runner.go:130] >         log
I1212 00:37:08.110602 104530 command_runner.go:130] >         errors
I1212 00:37:08.110608 104530 command_runner.go:130] >         health {
I1212 00:37:08.110614 104530 command_runner.go:130] >            lameduck 5s
I1212 00:37:08.110620 104530 command_runner.go:130] >         }
I1212 00:37:08.110627 104530 command_runner.go:130] >         ready
I1212 00:37:08.110636 104530 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
I1212 00:37:08.110647 104530 command_runner.go:130] >            pods insecure
I1212 00:37:08.110655 104530 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
I1212 00:37:08.110667 104530 command_runner.go:130] >            ttl 30
I1212 00:37:08.110673 104530 command_runner.go:130] >         }
I1212 00:37:08.110683 104530 command_runner.go:130] >         prometheus :9153
I1212 00:37:08.110693 104530 command_runner.go:130] >         hosts {
I1212 00:37:08.110705 104530 command_runner.go:130] >            192.168.39.1 host.minikube.internal
I1212 00:37:08.110714 104530 command_runner.go:130] >            fallthrough
I1212 00:37:08.110724 104530 command_runner.go:130] >         }
I1212 00:37:08.110732 104530 command_runner.go:130] >         forward . /etc/resolv.conf {
I1212 00:37:08.110737 104530 command_runner.go:130] >            max_concurrent 1000
I1212 00:37:08.110743 104530 command_runner.go:130] >         }
I1212 00:37:08.110748 104530 command_runner.go:130] >         cache 30
I1212 00:37:08.110755 104530 command_runner.go:130] >         loop
I1212 00:37:08.110761 104530 command_runner.go:130] >         reload
I1212 00:37:08.110765 104530 command_runner.go:130] >         loadbalance
I1212 00:37:08.110771 104530 command_runner.go:130] >     }
I1212 00:37:08.110776 104530 command_runner.go:130] > kind: ConfigMap
I1212 00:37:08.110782 104530 command_runner.go:130] > metadata:
I1212 00:37:08.110787 104530 command_runner.go:130] >   creationTimestamp: "2023-12-12T00:30:03Z"
I1212 00:37:08.110793 104530 command_runner.go:130] >   name: coredns
I1212 00:37:08.110797 104530 command_runner.go:130] >   namespace: kube-system
I1212 00:37:08.110804 104530 command_runner.go:130] >   resourceVersion: "407"
I1212 00:37:08.110808 104530 command_runner.go:130] >   uid: 58df000b-e223-4f9f-a0ce-e6a345bc8b1e
I1212 00:37:08.110871 104530 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
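[editor's note] start.go:902 skips the Corefile edit because the hosts block dumped above already carries the host.minikube.internal record. That decision is effectively a substring check against the ConfigMap's Corefile; a hedged sketch under the same client-go assumptions (hasHostRecord is an illustrative name):

    package readiness

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // hasHostRecord reports whether the coredns Corefile already resolves
    // host.minikube.internal, which is what lets the edit above be skipped.
    func hasHostRecord(ctx context.Context, c kubernetes.Interface) (bool, error) {
        cm, err := c.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
    }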
I1212 00:37:08.110910 104530 node_ready.go:35] waiting up to 6m0s for node "multinode-859606" to be "Ready" ...
I1212 00:37:08.111108 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:08.111132 104530 round_trippers.go:469] Request Headers:
I1212 00:37:08.111144 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:08.111155 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:08.115592 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:08.115608 104530 round_trippers.go:577] Response Headers:
I1212 00:37:08.115615 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:08.115620 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:08 GMT
I1212 00:37:08.115625 104530 round_trippers.go:580] Audit-Id: 78e22458-8a23-48e3-9e27-578febb59a20
I1212 00:37:08.115630 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:08.115635 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:08.115640 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:08.116255 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:08.289077 104530 request.go:629] Waited for 172.38964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:08.289150 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:08.289155 104530 round_trippers.go:469] Request Headers:
I1212 00:37:08.289163 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:08.289178 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:08.291767 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:08.291787 104530 round_trippers.go:577] Response Headers:
I1212 00:37:08.291797 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:08.291806 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:08 GMT
I1212 00:37:08.291817 104530 round_trippers.go:580] Audit-Id: bd808d02-17db-44e3-ae16-8f55b7323fe8
I1212 00:37:08.291829 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:08.291841 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:08.291852 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:08.292123 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:08.793301 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:08.793331 104530 round_trippers.go:469] Request Headers:
I1212 00:37:08.793340 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:08.793346 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:08.796482 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:08.796514 104530 round_trippers.go:577] Response Headers:
I1212 00:37:08.796525 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:08.796533 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:08.796539 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:08.796544 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:08 GMT
I1212 00:37:08.796549 104530 round_trippers.go:580] Audit-Id: f551640f-6397-4f2f-ad7b-75e7a1ad4ab4
I1212 00:37:08.796554 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:08.796722 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:09.293409 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:09.293442 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.293453 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.293461 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.296451 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:09.296469 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.296477 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.296482 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.296487 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.296496 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.296519 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.296527 104530 round_trippers.go:580] Audit-Id: 2a8eef1a-1ec0-43cd-aba1-3dcd1603fa87
I1212 00:37:09.296803 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:09.793597 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:09.793626 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.793645 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.793664 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.796604 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:09.796624 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.796631 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.796636 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.796644 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.796649 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.796654 104530 round_trippers.go:580] Audit-Id: 022e877a-18b3-43f9-ab6d-dff649dfc9f8
I1212 00:37:09.796659 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.796949 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:09.797279 104530 node_ready.go:49] node "multinode-859606" has status "Ready":"True"
I1212 00:37:09.797303 104530 node_ready.go:38] duration metric: took 1.686360286s waiting for node "multinode-859606" to be "Ready" ...
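[editor's note] The loop above polls GET /api/v1/nodes/multinode-859606 roughly every 500 ms until the response at resourceVersion 1213 reports Ready:True. A sketch of an equivalent wait with client-go and apimachinery's wait helpers; waitNodeReady is an illustrative name, and minikube's node_ready.go differs in detail:

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node every 500ms (the cadence visible in the
    // timestamps above) until its NodeReady condition is True or the time
    // budget (6m0s in the log) expires.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }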
I1212 00:37:09.797315 104530 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1212 00:37:09.797375 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:09.797386 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.797396 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.797406 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.801844 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:09.801867 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.801876 104530 round_trippers.go:580] Audit-Id: 420ea970-9f48-457c-b0f7-7ec9ec1a588e
I1212 00:37:09.801885 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.801894 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.801904 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.801927 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.801938 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.803506 104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1216"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83879 chars]
I1212 00:37:09.806061 104530 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
I1212 00:37:09.806150 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:09.806162 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.806174 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.806184 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.808345 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:09.808361 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.808374 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.808383 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.808397 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.808405 104530 round_trippers.go:580] Audit-Id: 9a9463c1-b358-492e-b922-367c6104207c
I1212 00:37:09.808413 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.808422 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.808706 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:09.809215 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:09.809231 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.809238 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.809244 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.811292 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:09.811307 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.811316 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.811323 104530 round_trippers.go:580] Audit-Id: f5ebccd1-dc5e-4d64-b27a-f59d7a10b2c3
I1212 00:37:09.811331 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.811346 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.811359 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.811367 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.811572 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:09.812037 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:09.812052 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.812059 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.812065 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.813996 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:09.814010 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.814019 104530 round_trippers.go:580] Audit-Id: e587521b-4190-4251-9713-9fe4cfdc8df1
I1212 00:37:09.814027 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.814034 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.814043 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.814054 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.814063 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.814382 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:09.889078 104530 request.go:629] Waited for 74.284522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:09.889133 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:09.889139 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.889148 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.889154 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.892171 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:09.892194 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.892203 104530 round_trippers.go:580] Audit-Id: 6c0b5759-dcf0-429c-88bf-c342959f386c
I1212 00:37:09.892229 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.892241 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.892250 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.892269 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.892283 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.892510 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:10.393716 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:10.393745 104530 round_trippers.go:469] Request Headers:
I1212 00:37:10.393755 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:10.393763 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:10.396859 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:10.396889 104530 round_trippers.go:577] Response Headers:
I1212 00:37:10.396899 104530 round_trippers.go:580] Audit-Id: 5e8103b3-ec4e-4213-995d-24c751476571
I1212 00:37:10.396907 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:10.396915 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:10.396923 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:10.396931 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:10.396939 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:10 GMT
I1212 00:37:10.397178 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:10.397682 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:10.397698 104530 round_trippers.go:469] Request Headers:
I1212 00:37:10.397713 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:10.397722 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:10.399962 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:10.399981 104530 round_trippers.go:577] Response Headers:
I1212 00:37:10.399991 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:10 GMT
I1212 00:37:10.399999 104530 round_trippers.go:580] Audit-Id: 63def391-cbb3-428c-8bda-86f13b98f5c0
I1212 00:37:10.400014 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:10.400026 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:10.400035 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:10.400046 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:10.400207 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:10.894000 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:10.894037 104530 round_trippers.go:469] Request Headers:
I1212 00:37:10.894048 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:10.894057 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:10.899308 104530 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I1212 00:37:10.899334 104530 round_trippers.go:577] Response Headers:
I1212 00:37:10.899344 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:10.899355 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:10.899362 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:10.899369 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:10 GMT
I1212 00:37:10.899377 104530 round_trippers.go:580] Audit-Id: a6f54ff0-c318-428c-9e20-5afa1d44815f
I1212 00:37:10.899383 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:10.899671 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:10.900196 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:10.900212 104530 round_trippers.go:469] Request Headers:
I1212 00:37:10.900219 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:10.900225 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:10.902531 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:10.902550 104530 round_trippers.go:577] Response Headers:
I1212 00:37:10.902560 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:10.902568 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:10.902576 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:10 GMT
I1212 00:37:10.902586 104530 round_trippers.go:580] Audit-Id: 72a3507b-3092-4d9e-bfa5-e84c0a5f5811
I1212 00:37:10.902599 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:10.902610 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:10.902856 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:11.393521 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:11.393559 104530 round_trippers.go:469] Request Headers:
I1212 00:37:11.393569 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:11.393583 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:11.397962 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:11.398001 104530 round_trippers.go:577] Response Headers:
I1212 00:37:11.398012 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:11.398020 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:11.398028 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:11.398036 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:11 GMT
I1212 00:37:11.398048 104530 round_trippers.go:580] Audit-Id: 36163564-e6ac-4456-b495-9930bf8c7c95
I1212 00:37:11.398056 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:11.399514 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:11.400077 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:11.400105 104530 round_trippers.go:469] Request Headers:
I1212 00:37:11.400115 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:11.400129 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:11.402841 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:11.402874 104530 round_trippers.go:577] Response Headers:
I1212 00:37:11.402895 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:11 GMT
I1212 00:37:11.402903 104530 round_trippers.go:580] Audit-Id: cf888e1f-3585-4d4c-b47a-d65c1b673f60
I1212 00:37:11.402913 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:11.402923 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:11.402936 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:11.402944 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:11.403152 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:11.893890 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:11.893921 104530 round_trippers.go:469] Request Headers:
I1212 00:37:11.893930 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:11.893936 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:11.896885 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:11.896910 104530 round_trippers.go:577] Response Headers:
I1212 00:37:11.896920 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:11.896927 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:11.896934 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:11.896942 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:11.896949 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:11 GMT
I1212 00:37:11.896956 104530 round_trippers.go:580] Audit-Id: 560ccbf4-a93e-418b-97ef-b02d5b4a7c2a
I1212 00:37:11.897291 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:11.897761 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:11.897778 104530 round_trippers.go:469] Request Headers:
I1212 00:37:11.897785 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:11.897791 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:11.900338 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:11.900381 104530 round_trippers.go:577] Response Headers:
I1212 00:37:11.900391 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:11 GMT
I1212 00:37:11.900400 104530 round_trippers.go:580] Audit-Id: 57fad163-7798-4518-b48a-afffca40ee66
I1212 00:37:11.900408 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:11.900416 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:11.900428 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:11.900438 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:11.900617 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:11.900907 104530 pod_ready.go:102] pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace has status "Ready":"False"
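[Note: the pod_ready.go:102 line above is the negative verdict of one polling iteration: the pod's status.conditions are scanned for the PodReady condition and its status compared against True. A sketch of that check against the client-go API types; this is a plausible shape, not minikube's exact code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True, which is
// what the `has status "Ready":"False"` log line above is testing.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(isPodReady(pod)) // false, matching the coredns pod at this point
}
]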
I1212 00:37:12.393289 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:12.393323 104530 round_trippers.go:469] Request Headers:
I1212 00:37:12.393337 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:12.393346 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:12.397658 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:12.397679 104530 round_trippers.go:577] Response Headers:
I1212 00:37:12.397686 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:12.397691 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:12 GMT
I1212 00:37:12.397697 104530 round_trippers.go:580] Audit-Id: 97d200a8-1144-4cfb-b7e7-ae622c67a09e
I1212 00:37:12.397702 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:12.397707 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:12.397712 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:12.398001 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:12.398453 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:12.398468 104530 round_trippers.go:469] Request Headers:
I1212 00:37:12.398475 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:12.398480 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:12.401097 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:12.401115 104530 round_trippers.go:577] Response Headers:
I1212 00:37:12.401122 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:12 GMT
I1212 00:37:12.401127 104530 round_trippers.go:580] Audit-Id: 2a27c4e6-1e77-48fe-b9ff-18537a1ba771
I1212 00:37:12.401135 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:12.401145 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:12.401153 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:12.401168 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:12.401283 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:12.893943 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:12.893969 104530 round_trippers.go:469] Request Headers:
I1212 00:37:12.893977 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:12.893984 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:12.897025 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:12.897047 104530 round_trippers.go:577] Response Headers:
I1212 00:37:12.897057 104530 round_trippers.go:580] Audit-Id: 551ec886-a3c8-4be6-946b-459f81574f91
I1212 00:37:12.897064 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:12.897071 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:12.897082 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:12.897091 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:12.897103 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:12 GMT
I1212 00:37:12.897283 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:12.898253 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:12.898328 104530 round_trippers.go:469] Request Headers:
I1212 00:37:12.898343 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:12.898352 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:12.902125 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:12.902151 104530 round_trippers.go:577] Response Headers:
I1212 00:37:12.902161 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:12 GMT
I1212 00:37:12.902171 104530 round_trippers.go:580] Audit-Id: bb98bd7a-c04d-437d-aef6-72f5de2e6aac
I1212 00:37:12.902182 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:12.902196 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:12.902214 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:12.902227 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:12.902594 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:13.393264 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:13.393294 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.393307 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.393317 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.396512 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:13.396534 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.396541 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.396546 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.396552 104530 round_trippers.go:580] Audit-Id: 7f6212d1-aaf4-45df-a3b0-bb989bb1227a
I1212 00:37:13.396560 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.396569 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.396578 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.396776 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:13.397248 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:13.397262 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.397270 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.397275 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.399404 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:13.399423 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.399433 104530 round_trippers.go:580] Audit-Id: 77e44ea3-4125-4d4b-9450-f85475c1539a
I1212 00:37:13.399440 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.399447 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.399454 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.399464 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.399471 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.399656 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:13.893292 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:13.893317 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.893325 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.893331 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.896458 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:13.896475 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.896487 104530 round_trippers.go:580] Audit-Id: ac46caca-dc3e-4d98-bda6-e430bb1fa8ae
I1212 00:37:13.896494 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.896512 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.896519 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.896526 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.896534 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.897107 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1231","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
I1212 00:37:13.897587 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:13.897603 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.897613 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.897621 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.900547 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:13.900568 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.900578 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.900586 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.900595 104530 round_trippers.go:580] Audit-Id: e3dbde9a-cc4a-4762-867f-d9e9a410aef1
I1212 00:37:13.900603 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.900611 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.900643 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.900901 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:13.901209 104530 pod_ready.go:92] pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:13.901226 104530 pod_ready.go:81] duration metric: took 4.09514334s waiting for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
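[Note: each kube-system pod is waited on in turn: poll roughly every 500ms, stop once Ready, and report the elapsed time as the "duration metric". A sketch of such a wait loop using client-go and apimachinery's wait helpers; the function name is illustrative, and the 500ms interval and 6m0s timeout are read off this log rather than taken from minikube's source:

package podwait

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod every 500ms for up to 6 minutes and logs the
// elapsed time once the PodReady condition turns True.
func waitPodReady(cs kubernetes.Interface, ns, name string) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient errors and keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		return err
	}
	log.Printf("duration metric: took %s waiting for pod %q", time.Since(start), name)
	return nil
}
]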
I1212 00:37:13.901265 104530 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:13.901326 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-859606
I1212 00:37:13.901336 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.901346 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.901356 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.903529 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:13.903549 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.903558 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.903566 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.903574 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.903582 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.903590 104530 round_trippers.go:580] Audit-Id: d34bc26a-3f02-4be9-9af2-1ad0fadfbfa3
I1212 00:37:13.903596 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.903967 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-859606","namespace":"kube-system","uid":"7d6ae370-b910-4aef-8729-e141b307ae17","resourceVersion":"1218","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.40:2379","kubernetes.io/config.hash":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.mirror":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.seen":"2023-12-12T00:30:03.645880014Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6072 chars]
I1212 00:37:13.904430 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:13.904447 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.904454 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.904460 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.906383 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:13.906404 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.906413 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.906420 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.906429 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.906444 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.906453 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.906466 104530 round_trippers.go:580] Audit-Id: 3f37632a-0e9f-4887-b36f-43d17d2e4134
I1212 00:37:13.906620 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:13.906989 104530 pod_ready.go:92] pod "etcd-multinode-859606" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:13.907016 104530 pod_ready.go:81] duration metric: took 5.741099ms waiting for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:13.907041 104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:13.907100 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-859606
I1212 00:37:13.907110 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.907118 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.907125 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.909221 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:13.909237 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.909245 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.909253 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.909260 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.909267 104530 round_trippers.go:580] Audit-Id: 10369159-e62c-4dd4-8d77-2e82a59d784d
I1212 00:37:13.909275 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.909287 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.909569 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-859606","namespace":"kube-system","uid":"0060efa7-dc06-439e-878f-b93b0e016326","resourceVersion":"1216","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.40:8443","kubernetes.io/config.hash":"6579d881f0553848179768317ac84853","kubernetes.io/config.mirror":"6579d881f0553848179768317ac84853","kubernetes.io/config.seen":"2023-12-12T00:29:55.207817853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7607 chars]
I1212 00:37:13.909929 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:13.909943 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.909953 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.909961 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.911781 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:13.911800 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.911808 104530 round_trippers.go:580] Audit-Id: c9f36dd0-0f04-4274-9537-6c203e1b93b8
I1212 00:37:13.911817 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.911825 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.911833 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.911841 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.911848 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.912152 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:13.912472 104530 pod_ready.go:92] pod "kube-apiserver-multinode-859606" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:13.912489 104530 pod_ready.go:81] duration metric: took 5.438494ms waiting for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:13.912497 104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:14.088914 104530 request.go:629] Waited for 176.352891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:14.089000 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:14.089007 104530 round_trippers.go:469] Request Headers:
I1212 00:37:14.089021 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:14.089037 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:14.092809 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:14.092835 104530 round_trippers.go:577] Response Headers:
I1212 00:37:14.092845 104530 round_trippers.go:580] Audit-Id: 2c2f7c55-459e-4d01-a3f2-96b1b6cb8c8b
I1212 00:37:14.092853 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:14.092861 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:14.092869 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:14.092876 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:14.092885 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:14 GMT
I1212 00:37:14.093110 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:14.288948 104530 request.go:629] Waited for 195.377005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:14.289023 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:14.289032 104530 round_trippers.go:469] Request Headers:
I1212 00:37:14.289039 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:14.289053 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:14.291661 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:14.291688 104530 round_trippers.go:577] Response Headers:
I1212 00:37:14.291699 104530 round_trippers.go:580] Audit-Id: 9a8ff279-becc-4981-a5d3-bab45d355f5b
I1212 00:37:14.291709 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:14.291716 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:14.291721 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:14.291729 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:14.291734 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:14 GMT
I1212 00:37:14.291936 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:14.489383 104530 request.go:629] Waited for 197.063929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:14.489461 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:14.489467 104530 round_trippers.go:469] Request Headers:
I1212 00:37:14.489475 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:14.489481 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:14.492357 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:14.492379 104530 round_trippers.go:577] Response Headers:
I1212 00:37:14.492386 104530 round_trippers.go:580] Audit-Id: 12e5b7b5-fd32-4fe6-b1ff-eb7b4430f001
I1212 00:37:14.492392 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:14.492397 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:14.492402 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:14.492407 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:14.492412 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:14 GMT
I1212 00:37:14.492593 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:14.689101 104530 request.go:629] Waited for 196.091909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:14.689191 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:14.689198 104530 round_trippers.go:469] Request Headers:
I1212 00:37:14.689208 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:14.689218 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:14.691837 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:14.691858 104530 round_trippers.go:577] Response Headers:
I1212 00:37:14.691865 104530 round_trippers.go:580] Audit-Id: 46cb3999-d30b-4074-ad3e-89d7533c5936
I1212 00:37:14.691870 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:14.691875 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:14.691880 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:14.691885 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:14.691891 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:14 GMT
I1212 00:37:14.692335 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:15.193200 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:15.193224 104530 round_trippers.go:469] Request Headers:
I1212 00:37:15.193232 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:15.193239 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:15.196981 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:15.197000 104530 round_trippers.go:577] Response Headers:
I1212 00:37:15.197006 104530 round_trippers.go:580] Audit-Id: e9469ca3-765f-4b94-bad8-b62081cb2809
I1212 00:37:15.197012 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:15.197034 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:15.197042 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:15.197049 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:15.197056 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:15 GMT
I1212 00:37:15.197197 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:15.197635 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:15.197650 104530 round_trippers.go:469] Request Headers:
I1212 00:37:15.197657 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:15.197663 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:15.199909 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:15.199943 104530 round_trippers.go:577] Response Headers:
I1212 00:37:15.199952 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:15 GMT
I1212 00:37:15.199959 104530 round_trippers.go:580] Audit-Id: 55872ce3-0e31-4a29-bd8d-2fef53f7f5ad
I1212 00:37:15.199967 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:15.199975 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:15.199983 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:15.199991 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:15.200167 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:15.693002 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:15.693027 104530 round_trippers.go:469] Request Headers:
I1212 00:37:15.693035 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:15.693041 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:15.695104 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:15.695127 104530 round_trippers.go:577] Response Headers:
I1212 00:37:15.695138 104530 round_trippers.go:580] Audit-Id: e8dafcef-e232-4564-93ec-c99146d453a6
I1212 00:37:15.695144 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:15.695152 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:15.695161 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:15.695170 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:15.695180 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:15 GMT
I1212 00:37:15.695539 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:15.695954 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:15.695966 104530 round_trippers.go:469] Request Headers:
I1212 00:37:15.695974 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:15.695979 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:15.697613 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:15.697631 104530 round_trippers.go:577] Response Headers:
I1212 00:37:15.697640 104530 round_trippers.go:580] Audit-Id: cd894f72-99d1-44a1-ba36-abb33011003a
I1212 00:37:15.697649 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:15.697656 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:15.697661 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:15.697666 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:15.697671 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:15 GMT
I1212 00:37:15.697922 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:16.193670 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:16.193698 104530 round_trippers.go:469] Request Headers:
I1212 00:37:16.193707 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:16.193712 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:16.196864 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:16.196891 104530 round_trippers.go:577] Response Headers:
I1212 00:37:16.196899 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:16.196904 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:16.196909 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:16 GMT
I1212 00:37:16.196920 104530 round_trippers.go:580] Audit-Id: 7e651bce-3845-4b66-8fb2-622327e8d40b
I1212 00:37:16.196928 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:16.196936 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:16.197330 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:16.197766 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:16.197783 104530 round_trippers.go:469] Request Headers:
I1212 00:37:16.197790 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:16.197796 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:16.200198 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:16.200219 104530 round_trippers.go:577] Response Headers:
I1212 00:37:16.200225 104530 round_trippers.go:580] Audit-Id: 9972e939-1cb4-4a78-8c0d-11a91b0625a8
I1212 00:37:16.200230 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:16.200235 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:16.200241 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:16.200249 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:16.200254 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:16 GMT
I1212 00:37:16.200367 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:16.200638 104530 pod_ready.go:102] pod "kube-controller-manager-multinode-859606" in "kube-system" namespace has status "Ready":"False"
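[annotation] The "Ready":"False" verdict above is derived from the Pod's status.conditions in the response body, not from the HTTP status. A minimal client-go sketch of that test (hypothetical helper name isPodReady; not minikube's actual code):

package podready

import (
    corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the PodReady condition is True, which is
// the check behind the pod_ready.go verdicts in this log.
func isPodReady(pod *corev1.Pod) bool {
    for _, cond := range pod.Status.Conditions {
        if cond.Type == corev1.PodReady {
            return cond.Status == corev1.ConditionTrue
        }
    }
    return false
}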
I1212 00:37:16.693040 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:16.693064 104530 round_trippers.go:469] Request Headers:
I1212 00:37:16.693073 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:16.693090 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:16.696324 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:16.696344 104530 round_trippers.go:577] Response Headers:
I1212 00:37:16.696354 104530 round_trippers.go:580] Audit-Id: cfb4110b-a12c-4dd5-bb27-d5b38a9bdf99
I1212 00:37:16.696363 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:16.696371 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:16.696380 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:16.696388 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:16.696393 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:16 GMT
I1212 00:37:16.696757 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:16.697175 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:16.697186 104530 round_trippers.go:469] Request Headers:
I1212 00:37:16.697193 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:16.697199 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:16.699444 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:16.699466 104530 round_trippers.go:577] Response Headers:
I1212 00:37:16.699482 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:16.699489 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:16.699508 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:16.699514 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:16.699519 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:16 GMT
I1212 00:37:16.699524 104530 round_trippers.go:580] Audit-Id: 86f1d394-268f-4773-8a4f-65dfa15966b3
I1212 00:37:16.699786 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:17.193535 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:17.193562 104530 round_trippers.go:469] Request Headers:
I1212 00:37:17.193571 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:17.193577 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:17.197001 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:17.197029 104530 round_trippers.go:577] Response Headers:
I1212 00:37:17.197039 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:17.197048 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:17.197056 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:17.197063 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:17 GMT
I1212 00:37:17.197078 104530 round_trippers.go:580] Audit-Id: 0039bd07-2809-441c-8a08-a005a1fb9474
I1212 00:37:17.197086 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:17.197590 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:17.198195 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:17.198215 104530 round_trippers.go:469] Request Headers:
I1212 00:37:17.198227 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:17.198235 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:17.200561 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:17.200580 104530 round_trippers.go:577] Response Headers:
I1212 00:37:17.200594 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:17.200602 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:17.200608 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:17 GMT
I1212 00:37:17.200615 104530 round_trippers.go:580] Audit-Id: 7ca59026-3641-45f9-af2d-e56b2f15bbf4
I1212 00:37:17.200623 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:17.200631 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:17.200818 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:17.693526 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:17.693559 104530 round_trippers.go:469] Request Headers:
I1212 00:37:17.693573 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:17.693581 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:17.696472 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:17.696503 104530 round_trippers.go:577] Response Headers:
I1212 00:37:17.696515 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:17 GMT
I1212 00:37:17.696522 104530 round_trippers.go:580] Audit-Id: f3b1cbfa-67ea-48ba-a602-3e51e26733e7
I1212 00:37:17.696529 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:17.696537 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:17.696546 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:17.696556 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:17.696733 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:17.697203 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:17.697219 104530 round_trippers.go:469] Request Headers:
I1212 00:37:17.697230 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:17.697237 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:17.699246 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:17.699267 104530 round_trippers.go:577] Response Headers:
I1212 00:37:17.699274 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:17.699279 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:17.699284 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:17.699289 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:17.699303 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:17 GMT
I1212 00:37:17.699311 104530 round_trippers.go:580] Audit-Id: 537e896a-ad01-467d-8765-b18cc048639c
I1212 00:37:17.699750 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:18.193513 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:18.193539 104530 round_trippers.go:469] Request Headers:
I1212 00:37:18.193547 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:18.193553 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:18.196642 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:18.196663 104530 round_trippers.go:577] Response Headers:
I1212 00:37:18.196670 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:18.196675 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:18.196680 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:18.196685 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:18.196690 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:18 GMT
I1212 00:37:18.196695 104530 round_trippers.go:580] Audit-Id: 01c5b2b7-3578-4302-9a5b-dbb75c34b269
I1212 00:37:18.197211 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:18.197615 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:18.197626 104530 round_trippers.go:469] Request Headers:
I1212 00:37:18.197637 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:18.197645 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:18.199967 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:18.199986 104530 round_trippers.go:577] Response Headers:
I1212 00:37:18.199995 104530 round_trippers.go:580] Audit-Id: b956bf4f-9b6c-4de6-87c0-84916a54c9aa
I1212 00:37:18.200004 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:18.200012 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:18.200019 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:18.200027 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:18.200035 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:18 GMT
I1212 00:37:18.200333 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:18.692979 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:18.693006 104530 round_trippers.go:469] Request Headers:
I1212 00:37:18.693014 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:18.693021 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:18.696863 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:18.696888 104530 round_trippers.go:577] Response Headers:
I1212 00:37:18.696895 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:18.696901 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:18 GMT
I1212 00:37:18.696906 104530 round_trippers.go:580] Audit-Id: d5c6e54d-aaea-4bf3-8a70-4dc0b57b264e
I1212 00:37:18.696911 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:18.696916 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:18.696921 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:18.697946 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:18.698353 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:18.698366 104530 round_trippers.go:469] Request Headers:
I1212 00:37:18.698373 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:18.698381 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:18.700609 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:18.700629 104530 round_trippers.go:577] Response Headers:
I1212 00:37:18.700639 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:18.700647 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:18 GMT
I1212 00:37:18.700655 104530 round_trippers.go:580] Audit-Id: 0dde864e-ad38-4768-932a-24947963eeef
I1212 00:37:18.700662 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:18.700669 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:18.700677 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:18.700840 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:18.701109 104530 pod_ready.go:102] pod "kube-controller-manager-multinode-859606" in "kube-system" namespace has status "Ready":"False"
I1212 00:37:19.193617 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:19.193643 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.193652 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.193658 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.197048 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:19.197071 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.197078 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.197083 104530 round_trippers.go:580] Audit-Id: 20502bbb-60e6-48d0-b283-2696575d955f
I1212 00:37:19.197090 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.197095 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.197100 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.197106 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.197298 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1240","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
I1212 00:37:19.197741 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:19.197753 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.197760 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.197766 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.199854 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:19.199879 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.199889 104530 round_trippers.go:580] Audit-Id: d3c788eb-c748-41e7-8b78-70c1417d3584
I1212 00:37:19.199898 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.199907 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.199932 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.199946 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.199954 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.200107 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:19.200426 104530 pod_ready.go:92] pod "kube-controller-manager-multinode-859606" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:19.200447 104530 pod_ready.go:81] duration metric: took 5.287942632s waiting for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
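[annotation] The 5.3s wait above is a poll loop: roughly every 500ms (visible in the request timestamps) the pod is re-fetched until its Ready condition flips to True. A sketch of such a loop, assuming client-go and apimachinery's wait helpers; the function name is hypothetical:

package podready

import (
    "context"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitPodReady re-fetches the pod every 500ms until PodReady is True
// or the timeout (6m0s in this log) expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
        func(ctx context.Context) (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
}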
I1212 00:37:19.200463 104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
I1212 00:37:19.200518 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6f6zz
I1212 00:37:19.200527 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.200538 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.200547 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.203112 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:19.203134 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.203143 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.203151 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.203159 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.203168 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.203177 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.203185 104530 round_trippers.go:580] Audit-Id: d4bddcbb-39f6-4c08-83da-2d4523904cda
I1212 00:37:19.203320 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6f6zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d5931621-47fd-4f1a-bf46-813dd8352f00","resourceVersion":"1087","creationTimestamp":"2023-12-12T00:32:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:32:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
I1212 00:37:19.203874 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m03
I1212 00:37:19.203896 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.203907 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.203928 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.206014 104530 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1212 00:37:19.206033 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.206049 104530 round_trippers.go:580] Content-Length: 210
I1212 00:37:19.206061 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.206068 104530 round_trippers.go:580] Audit-Id: 4aef6f8a-43a6-4188-a386-e5e2d3a1f6f3
I1212 00:37:19.206082 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.206089 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.206097 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.206105 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.206236 104530 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-859606-m03\" not found","reason":"NotFound","details":{"name":"multinode-859606-m03","kind":"nodes"},"code":404}
I1212 00:37:19.206386 104530 pod_ready.go:97] node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
I1212 00:37:19.206408 104530 pod_ready.go:81] duration metric: took 5.937337ms waiting for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
E1212 00:37:19.206423 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
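[annotation] kube-proxy-6f6zz is skipped rather than failed: its host node multinode-859606-m03 returns 404, and the waiter treats a missing node as "not Ready, move on". A sketch of that branch, assuming client-go's error helpers; nodeExists is a hypothetical name:

package podready

import (
    "context"
    "fmt"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// nodeExists distinguishes "node gone" (the skipping! branch above)
// from a real API error.
func nodeExists(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    _, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    if apierrors.IsNotFound(err) {
        return false, nil
    }
    if err != nil {
        return false, fmt.Errorf("getting node %q: %w", name, err)
    }
    return true, nil
}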
I1212 00:37:19.206431 104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
I1212 00:37:19.206494 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-prf7f
I1212 00:37:19.206504 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.206515 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.206527 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.208365 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:19.208385 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.208394 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.208403 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.208418 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.208426 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.208437 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.208447 104530 round_trippers.go:580] Audit-Id: c0033a2c-2985-4a9c-95d1-b824f5e20713
I1212 00:37:19.208684 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-prf7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"8238226c-3d01-4b91-963b-7360206b8615","resourceVersion":"1206","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
I1212 00:37:19.209132 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:19.209150 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.209164 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.209177 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.210970 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:19.210988 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.210997 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.211006 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.211020 104530 round_trippers.go:580] Audit-Id: 396956f0-54b8-4778-ab7c-a37fe9b33b2e
I1212 00:37:19.211027 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.211041 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.211052 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.211256 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:19.211606 104530 pod_ready.go:92] pod "kube-proxy-prf7f" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:19.211630 104530 pod_ready.go:81] duration metric: took 5.187099ms waiting for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
I1212 00:37:19.211641 104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
I1212 00:37:19.288985 104530 request.go:629] Waited for 77.268211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
I1212 00:37:19.289047 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
I1212 00:37:19.289060 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.289074 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.289085 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.291884 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:19.291923 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.291934 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.291943 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.291954 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.291962 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.291969 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.291984 104530 round_trippers.go:580] Audit-Id: f9222a80-11b7-4070-b9c2-ea9633cc9696
I1212 00:37:19.292162 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q9h26","generateName":"kube-proxy-","namespace":"kube-system","uid":"7dd12033-bf81-4cd3-a412-3fe3211dc87b","resourceVersion":"978","creationTimestamp":"2023-12-12T00:31:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:31:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
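[annotation] The "Waited for ... due to client-side throttling" lines here and below come from client-go's own token-bucket rate limiter, not from apiserver priority-and-fairness: once the burst allowance is spent, queued requests are delayed. A sketch of where those knobs live (the values shown are client-go's defaults, not necessarily what minikube sets):

package podready

import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// newClient builds a clientset whose rate limiter produces the delays
// logged above when polls arrive in bursts.
func newClient(cfg *rest.Config) (kubernetes.Interface, error) {
    cfg.QPS = 5    // steady-state requests per second (client-go default)
    cfg.Burst = 10 // burst allowance before requests start queueing
    return kubernetes.NewForConfig(cfg)
}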
I1212 00:37:19.489027 104530 request.go:629] Waited for 196.400938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
I1212 00:37:19.489092 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
I1212 00:37:19.489097 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.489104 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.489111 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.492013 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:19.492033 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.492040 104530 round_trippers.go:580] Audit-Id: 78f39b63-2309-4f9b-bec7-2fb901d235db
I1212 00:37:19.492045 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.492051 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.492060 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.492069 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.492078 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.492270 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606-m02","uid":"4dead465-c032-4274-8147-a5a7d38c1bf5","resourceVersion":"1083","creationTimestamp":"2023-12-12T00:34:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_35_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:34:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3805 chars]
I1212 00:37:19.492641 104530 pod_ready.go:92] pod "kube-proxy-q9h26" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:19.492662 104530 pod_ready.go:81] duration metric: took 281.010934ms waiting for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
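[annotation] Note each pod verdict pairs a pod GET with a GET of its hosting node (multinode-859606-m02 above): the pod only counts as Ready if the node's own Ready condition is also True. A companion sketch to isPodReady, again with a hypothetical name:

package podready

import (
    corev1 "k8s.io/api/core/v1"
)

// isNodeReady reports whether the NodeReady condition is True; the
// waiter applies it to the node fetched after each pod check.
func isNodeReady(node *corev1.Node) bool {
    for _, c := range node.Status.Conditions {
        if c.Type == corev1.NodeReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}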
I1212 00:37:19.492672 104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:19.688873 104530 request.go:629] Waited for 196.137127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
I1212 00:37:19.688950 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
I1212 00:37:19.688955 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.688963 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.688969 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.691734 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:19.691755 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.691762 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.691767 104530 round_trippers.go:580] Audit-Id: f7675bf4-e31a-4738-b42f-be7859177fe3
I1212 00:37:19.691772 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.691777 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.691783 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.691788 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.692171 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-859606","namespace":"kube-system","uid":"19a4264c-6ba5-44f4-8419-6f04d6224c92","resourceVersion":"1215","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.mirror":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.seen":"2023-12-12T00:29:55.207819594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
I1212 00:37:19.888908 104530 request.go:629] Waited for 196.296036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:19.888977 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:19.888982 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.888989 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.888996 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.891677 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:19.891697 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.891704 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.891710 104530 round_trippers.go:580] Audit-Id: 05fc06a3-8feb-45d4-9823-a6b2852345e9
I1212 00:37:19.891723 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.891735 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.891745 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.891754 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.892212 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:19.892531 104530 pod_ready.go:92] pod "kube-scheduler-multinode-859606" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:19.892549 104530 pod_ready.go:81] duration metric: took 399.870057ms waiting for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:19.892566 104530 pod_ready.go:38] duration metric: took 10.095238343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1212 00:37:19.892585 104530 api_server.go:52] waiting for apiserver process to appear ...
I1212 00:37:19.892637 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:37:19.905440 104530 command_runner.go:130] > 1800
I1212 00:37:19.905932 104530 api_server.go:72] duration metric: took 11.991353984s to wait for apiserver process to appear ...
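[annotation] The process check above shells out and takes the printed PID (1800) as proof the apiserver process exists. A local stand-in for that command, for illustration only (the real ssh_runner executes it inside the VM):

package podready

import (
    "os/exec"
    "strings"
)

// apiserverPID runs the same pgrep the log shows and returns the
// newest matching PID; pgrep exits non-zero when nothing matches.
func apiserverPID() (string, error) {
    out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    if err != nil {
        return "", err
    }
    return strings.TrimSpace(string(out)), nil
}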
I1212 00:37:19.905947 104530 api_server.go:88] waiting for apiserver healthz status ...
I1212 00:37:19.905967 104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
I1212 00:37:19.912545 104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 200:
ok
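[annotation] The healthz probe is a plain GET whose success criterion is the literal body "ok". A sketch; TLS verification is skipped here purely to keep it short, whereas the real client presents the cluster's certificates:

package podready

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
)

// checkHealthz mirrors the probe above against a URL such as
// https://192.168.39.40:8443/healthz (passed in by the caller).
func checkHealthz(url string) error {
    client := &http.Client{Transport: &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    }}
    resp, err := client.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    if resp.StatusCode != http.StatusOK || string(body) != "ok" {
        return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
    }
    return nil
}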
I1212 00:37:19.912608 104530 round_trippers.go:463] GET https://192.168.39.40:8443/version
I1212 00:37:19.912620 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.912630 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.912637 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.913604 104530 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I1212 00:37:19.913622 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.913631 104530 round_trippers.go:580] Audit-Id: a90e5deb-2922-43fe-bcfb-bbd1e68986eb
I1212 00:37:19.913640 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.913655 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.913663 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.913674 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.913683 104530 round_trippers.go:580] Content-Length: 264
I1212 00:37:19.913691 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.913714 104530 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.4",
"gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
"gitTreeState": "clean",
"buildDate": "2023-11-15T16:48:54Z",
"goVersion": "go1.20.11",
"compiler": "gc",
"platform": "linux/amd64"
}
I1212 00:37:19.913766 104530 api_server.go:141] control plane version: v1.28.4
I1212 00:37:19.913784 104530 api_server.go:131] duration metric: took 7.830198ms to wait for apiserver health ...
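
The health gate above is two plain HTTPS GETs: /healthz returns the literal body "ok", and /version returns the JSON echoed in the log. A standalone sketch; TLS verification is skipped purely for illustration (minikube authenticates with the cluster's client certificates), and on default RBAC these two paths are typically readable even without credentials:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}

	resp, err := client.Get("https://192.168.39.40:8443/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

	resp, err = client.Get("https://192.168.39.40:8443/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	_ = json.NewDecoder(resp.Body).Decode(&v)
	fmt.Println("control plane version:", v.GitVersion) // expect: v1.28.4
}
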
I1212 00:37:19.913794 104530 system_pods.go:43] waiting for kube-system pods to appear ...
I1212 00:37:20.089251 104530 request.go:629] Waited for 175.374729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:20.089344 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:20.089351 104530 round_trippers.go:469] Request Headers:
I1212 00:37:20.089363 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:20.089370 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:20.093974 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:20.094001 104530 round_trippers.go:577] Response Headers:
I1212 00:37:20.094009 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:20.094016 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:20 GMT
I1212 00:37:20.094024 104530 round_trippers.go:580] Audit-Id: a00499e6-5aa6-4108-b030-bb102abafbdd
I1212 00:37:20.094032 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:20.094055 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:20.094065 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:20.095252 104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1231","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83341 chars]
I1212 00:37:20.098784 104530 system_pods.go:59] 12 kube-system pods found
I1212 00:37:20.098809 104530 system_pods.go:61] "coredns-5dd5756b68-t9jz8" [3605a003-e8d6-46b2-8fe7-f45647656622] Running
I1212 00:37:20.098814 104530 system_pods.go:61] "etcd-multinode-859606" [7d6ae370-b910-4aef-8729-e141b307ae17] Running
I1212 00:37:20.098820 104530 system_pods.go:61] "kindnet-9slwc" [6b37daf7-e9d5-47c5-ae94-01150282b6cf] Running
I1212 00:37:20.098826 104530 system_pods.go:61] "kindnet-d4q52" [35ed1c56-7487-4b6d-ab1f-b5cfe6502739] Running
I1212 00:37:20.098832 104530 system_pods.go:61] "kindnet-x2g5d" [c1dab004-2557-4b4f-975b-bd0b5a8f4d90] Running
I1212 00:37:20.098839 104530 system_pods.go:61] "kube-apiserver-multinode-859606" [0060efa7-dc06-439e-878f-b93b0e016326] Running
I1212 00:37:20.098853 104530 system_pods.go:61] "kube-controller-manager-multinode-859606" [901bf3ab-f34d-42c8-b1da-d5431ae0219f] Running
I1212 00:37:20.098864 104530 system_pods.go:61] "kube-proxy-6f6zz" [d5931621-47fd-4f1a-bf46-813dd8352f00] Running
I1212 00:37:20.098870 104530 system_pods.go:61] "kube-proxy-prf7f" [8238226c-3d01-4b91-963b-7360206b8615] Running
I1212 00:37:20.098877 104530 system_pods.go:61] "kube-proxy-q9h26" [7dd12033-bf81-4cd3-a412-3fe3211dc87b] Running
I1212 00:37:20.098887 104530 system_pods.go:61] "kube-scheduler-multinode-859606" [19a4264c-6ba5-44f4-8419-6f04d6224c92] Running
I1212 00:37:20.098896 104530 system_pods.go:61] "storage-provisioner" [a021db21-b335-4c05-8e32-808642dbb72e] Running
I1212 00:37:20.098906 104530 system_pods.go:74] duration metric: took 185.102197ms to wait for pod list to return data ...
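
The "Waited ... due to client-side throttling" lines are client-go's token-bucket rate limiter spacing out a burst of requests; as the message itself notes, this is not server-side priority and fairness. A sketch of where those knobs live on a rest.Config, with illustrative values (the client-go defaults are QPS 5 and burst 10); the kubeconfig path is hypothetical:

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // client-go default is 5 requests/second
	cfg.Burst = 100 // client-go default is 10
	tuned := rest.CopyConfig(cfg)
	fmt.Printf("client rate limit: qps=%v burst=%v\n", tuned.QPS, tuned.Burst)
}
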
I1212 00:37:20.098917 104530 default_sa.go:34] waiting for default service account to be created ...
I1212 00:37:20.289369 104530 request.go:629] Waited for 190.371344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/default/serviceaccounts
I1212 00:37:20.289426 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/default/serviceaccounts
I1212 00:37:20.289431 104530 round_trippers.go:469] Request Headers:
I1212 00:37:20.289439 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:20.289445 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:20.292334 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:20.292356 104530 round_trippers.go:577] Response Headers:
I1212 00:37:20.292380 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:20.292392 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:20.292406 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:20.292429 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:20.292440 104530 round_trippers.go:580] Content-Length: 262
I1212 00:37:20.292445 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:20 GMT
I1212 00:37:20.292452 104530 round_trippers.go:580] Audit-Id: fcc27580-a669-4f4d-a44c-e2fc099e94e8
I1212 00:37:20.292478 104530 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b7226be9-2d9e-41aa-a29f-25b2631acf72","resourceVersion":"337","creationTimestamp":"2023-12-12T00:30:16Z"}}]}
I1212 00:37:20.292693 104530 default_sa.go:45] found service account: "default"
I1212 00:37:20.292714 104530 default_sa.go:55] duration metric: took 193.787623ms for default service account to be created ...
I1212 00:37:20.292723 104530 system_pods.go:116] waiting for k8s-apps to be running ...
I1212 00:37:20.489190 104530 request.go:629] Waited for 196.390334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:20.489259 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:20.489264 104530 round_trippers.go:469] Request Headers:
I1212 00:37:20.489281 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:20.489299 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:20.493457 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:20.493482 104530 round_trippers.go:577] Response Headers:
I1212 00:37:20.493501 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:20.493511 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:20.493519 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:20.493534 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:20.493541 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:20 GMT
I1212 00:37:20.493545 104530 round_trippers.go:580] Audit-Id: b5e27102-8247-4af2-81d0-d5c782e978b9
I1212 00:37:20.495018 104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1231","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83341 chars]
I1212 00:37:20.497464 104530 system_pods.go:86] 12 kube-system pods found
I1212 00:37:20.497487 104530 system_pods.go:89] "coredns-5dd5756b68-t9jz8" [3605a003-e8d6-46b2-8fe7-f45647656622] Running
I1212 00:37:20.497492 104530 system_pods.go:89] "etcd-multinode-859606" [7d6ae370-b910-4aef-8729-e141b307ae17] Running
I1212 00:37:20.497498 104530 system_pods.go:89] "kindnet-9slwc" [6b37daf7-e9d5-47c5-ae94-01150282b6cf] Running
I1212 00:37:20.497505 104530 system_pods.go:89] "kindnet-d4q52" [35ed1c56-7487-4b6d-ab1f-b5cfe6502739] Running
I1212 00:37:20.497520 104530 system_pods.go:89] "kindnet-x2g5d" [c1dab004-2557-4b4f-975b-bd0b5a8f4d90] Running
I1212 00:37:20.497528 104530 system_pods.go:89] "kube-apiserver-multinode-859606" [0060efa7-dc06-439e-878f-b93b0e016326] Running
I1212 00:37:20.497543 104530 system_pods.go:89] "kube-controller-manager-multinode-859606" [901bf3ab-f34d-42c8-b1da-d5431ae0219f] Running
I1212 00:37:20.497550 104530 system_pods.go:89] "kube-proxy-6f6zz" [d5931621-47fd-4f1a-bf46-813dd8352f00] Running
I1212 00:37:20.497554 104530 system_pods.go:89] "kube-proxy-prf7f" [8238226c-3d01-4b91-963b-7360206b8615] Running
I1212 00:37:20.497560 104530 system_pods.go:89] "kube-proxy-q9h26" [7dd12033-bf81-4cd3-a412-3fe3211dc87b] Running
I1212 00:37:20.497565 104530 system_pods.go:89] "kube-scheduler-multinode-859606" [19a4264c-6ba5-44f4-8419-6f04d6224c92] Running
I1212 00:37:20.497571 104530 system_pods.go:89] "storage-provisioner" [a021db21-b335-4c05-8e32-808642dbb72e] Running
I1212 00:37:20.497579 104530 system_pods.go:126] duration metric: took 204.845476ms to wait for k8s-apps to be running ...
I1212 00:37:20.497589 104530 system_svc.go:44] waiting for kubelet service to be running ...
I1212 00:37:20.497645 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1212 00:37:20.514001 104530 system_svc.go:56] duration metric: took 16.405003ms for WaitForService to wait for kubelet.
I1212 00:37:20.514018 104530 kubeadm.go:581] duration metric: took 12.599444535s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
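
The kubelet check relies on systemctl's exit-code contract: "is-active --quiet" prints nothing and exits 0 only when the unit is active. A local sketch of the same probe (minikube runs it over SSH, and the logged command also prefixes sudo and an extra "service" token; the plain unit name suffices here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means active; any non-zero status surfaces as a non-nil error.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet service not active:", err)
		return
	}
	fmt.Println("kubelet service active")
}
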
I1212 00:37:20.514036 104530 node_conditions.go:102] verifying NodePressure condition ...
I1212 00:37:20.689493 104530 request.go:629] Waited for 175.357994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes
I1212 00:37:20.689560 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes
I1212 00:37:20.689567 104530 round_trippers.go:469] Request Headers:
I1212 00:37:20.689580 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:20.689590 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:20.692705 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:20.692723 104530 round_trippers.go:577] Response Headers:
I1212 00:37:20.692730 104530 round_trippers.go:580] Audit-Id: 1464068b-baf2-48bc-ba66-087651c82097
I1212 00:37:20.692735 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:20.692740 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:20.692752 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:20.692766 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:20.692774 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:20 GMT
I1212 00:37:20.693088 104530 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10008 chars]
I1212 00:37:20.693685 104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1212 00:37:20.693709 104530 node_conditions.go:123] node cpu capacity is 2
I1212 00:37:20.693723 104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1212 00:37:20.693735 104530 node_conditions.go:123] node cpu capacity is 2
I1212 00:37:20.693741 104530 node_conditions.go:105] duration metric: took 179.70085ms to run NodePressure ...
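
The NodePressure step reads capacity straight off the NodeList fetched above (two nodes, hence the repeated ephemeral-storage/cpu pair). A client-go sketch of the same read, again assuming a hypothetical kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// Matches the log's "storage ephemeral capacity is 17784752Ki" / "cpu capacity is 2".
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
}
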
I1212 00:37:20.693757 104530 start.go:228] waiting for startup goroutines ...
I1212 00:37:20.693768 104530 start.go:233] waiting for cluster config update ...
I1212 00:37:20.693780 104530 start.go:242] writing updated cluster config ...
I1212 00:37:20.694346 104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:37:20.694464 104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
I1212 00:37:20.697216 104530 out.go:177] * Starting worker node multinode-859606-m02 in cluster multinode-859606
I1212 00:37:20.698351 104530 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I1212 00:37:20.698370 104530 cache.go:56] Caching tarball of preloaded images
I1212 00:37:20.698473 104530 preload.go:174] Found /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1212 00:37:20.698483 104530 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I1212 00:37:20.698567 104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
I1212 00:37:20.698742 104530 start.go:365] acquiring machines lock for multinode-859606-m02: {Name:mk381e91746c2e5b8a4620fe3fd447d80375e413 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1212 00:37:20.698785 104530 start.go:369] acquired machines lock for "multinode-859606-m02" in 25.605µs
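
The machines lock serializes access to a machine by name, with the 13m timeout visible in the lock spec above. minikube uses a cross-process mutex for this; the in-process sketch below only illustrates the acquire-with-timeout shape, not the real implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// namedLock is a one-slot channel used as a mutex with timeout semantics.
type namedLock chan struct{}

func newNamedLock() namedLock { return make(chan struct{}, 1) }

func (l namedLock) acquire(timeout time.Duration) error {
	select {
	case l <- struct{}{}:
		return nil
	case <-time.After(timeout):
		return errors.New("timed out acquiring machines lock")
	}
}

func (l namedLock) release() { <-l }

func main() {
	lock := newNamedLock()
	start := time.Now()
	if err := lock.acquire(13 * time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("acquired machines lock in %v\n", time.Since(start)) // uncontended: microseconds
	lock.release()
}
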
I1212 00:37:20.698798 104530 start.go:96] Skipping create...Using existing machine configuration
I1212 00:37:20.698805 104530 fix.go:54] fixHost starting: m02
I1212 00:37:20.699049 104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:37:20.699070 104530 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:37:20.713769 104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39383
I1212 00:37:20.714173 104530 main.go:141] libmachine: () Calling .GetVersion
I1212 00:37:20.714616 104530 main.go:141] libmachine: Using API Version 1
I1212 00:37:20.714644 104530 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:37:20.714957 104530 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:37:20.715148 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:20.715321 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetState
I1212 00:37:20.716762 104530 fix.go:102] recreateIfNeeded on multinode-859606-m02: state=Stopped err=<nil>
I1212 00:37:20.716788 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
W1212 00:37:20.716969 104530 fix.go:128] unexpected machine state, will restart: <nil>
I1212 00:37:20.718972 104530 out.go:177] * Restarting existing kvm2 VM for "multinode-859606-m02" ...
I1212 00:37:20.720351 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .Start
I1212 00:37:20.720531 104530 main.go:141] libmachine: (multinode-859606-m02) Ensuring networks are active...
I1212 00:37:20.721224 104530 main.go:141] libmachine: (multinode-859606-m02) Ensuring network default is active
I1212 00:37:20.721668 104530 main.go:141] libmachine: (multinode-859606-m02) Ensuring network mk-multinode-859606 is active
I1212 00:37:20.722168 104530 main.go:141] libmachine: (multinode-859606-m02) Getting domain xml...
I1212 00:37:20.722963 104530 main.go:141] libmachine: (multinode-859606-m02) Creating domain...
I1212 00:37:21.957474 104530 main.go:141] libmachine: (multinode-859606-m02) Waiting to get IP...
I1212 00:37:21.958335 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:21.958740 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:21.958796 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:21.958699 104802 retry.go:31] will retry after 282.895442ms: waiting for machine to come up
I1212 00:37:22.243280 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:22.243745 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:22.243773 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:22.243699 104802 retry.go:31] will retry after 387.587998ms: waiting for machine to come up
I1212 00:37:22.633350 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:22.633841 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:22.633875 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:22.633770 104802 retry.go:31] will retry after 299.810803ms: waiting for machine to come up
I1212 00:37:22.935179 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:22.935627 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:22.935662 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:22.935567 104802 retry.go:31] will retry after 368.460834ms: waiting for machine to come up
I1212 00:37:23.306050 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:23.306531 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:23.306554 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:23.306486 104802 retry.go:31] will retry after 567.761569ms: waiting for machine to come up
I1212 00:37:23.876187 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:23.876658 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:23.876692 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:23.876603 104802 retry.go:31] will retry after 673.685642ms: waiting for machine to come up
I1212 00:37:24.551471 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:24.551879 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:24.551932 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:24.551825 104802 retry.go:31] will retry after 837.913991ms: waiting for machine to come up
I1212 00:37:25.391781 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:25.392075 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:25.392106 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:25.392038 104802 retry.go:31] will retry after 1.006695939s: waiting for machine to come up
I1212 00:37:26.400658 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:26.401136 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:26.401168 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:26.401063 104802 retry.go:31] will retry after 1.662996951s: waiting for machine to come up
I1212 00:37:28.065937 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:28.066407 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:28.066429 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:28.066363 104802 retry.go:31] will retry after 2.272536479s: waiting for machine to come up
I1212 00:37:30.341875 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:30.342336 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:30.342380 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:30.342274 104802 retry.go:31] will retry after 1.895134507s: waiting for machine to come up
I1212 00:37:32.239315 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:32.239701 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:32.239736 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:32.239637 104802 retry.go:31] will retry after 2.566822425s: waiting for machine to come up
I1212 00:37:34.808939 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:34.809382 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:34.809406 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:34.809339 104802 retry.go:31] will retry after 4.439419543s: waiting for machine to come up
I1212 00:37:39.249907 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.250290 104530 main.go:141] libmachine: (multinode-859606-m02) Found IP for machine: 192.168.39.65
I1212 00:37:39.250320 104530 main.go:141] libmachine: (multinode-859606-m02) Reserving static IP address...
I1212 00:37:39.250342 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has current primary IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.250818 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "multinode-859606-m02", mac: "52:54:00:ea:e9:13", ip: "192.168.39.65"} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.250858 104530 main.go:141] libmachine: (multinode-859606-m02) Reserved static IP address: 192.168.39.65
I1212 00:37:39.250878 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | skip adding static IP to network mk-multinode-859606 - found existing host DHCP lease matching {name: "multinode-859606-m02", mac: "52:54:00:ea:e9:13", ip: "192.168.39.65"}
I1212 00:37:39.250889 104530 main.go:141] libmachine: (multinode-859606-m02) Waiting for SSH to be available...
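
The retry.go lines above are a jittered backoff loop polling libvirt's DHCP leases until the VM reports an IP. A generic sketch of that shape with a stubbed probe; the delays, jitter, and attempt count here are illustrative, not minikube's actual schedule:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls probe until it succeeds, sleeping a growing,
// jittered delay between attempts.
func retryWithBackoff(maxAttempts int, probe func() (string, error)) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < maxAttempts; i++ {
		if ip, err := probe(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay))) // 1x-2x of base delay
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	attempts := 0
	ip, err := retryWithBackoff(10, func() (string, error) {
		attempts++
		if attempts < 4 { // pretend the DHCP lease shows up on the 4th poll
			return "", errors.New("no lease yet")
		}
		return "192.168.39.65", nil
	})
	fmt.Println(ip, err)
}
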
I1212 00:37:39.250909 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | Getting to WaitForSSH function...
I1212 00:37:39.253228 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.253705 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.253733 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.253879 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | Using SSH client type: external
I1212 00:37:39.253906 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa (-rw-------)
I1212 00:37:39.253933 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I1212 00:37:39.253947 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | About to run SSH command:
I1212 00:37:39.253968 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | exit 0
I1212 00:37:39.347723 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | SSH cmd err, output: <nil>:
I1212 00:37:39.348137 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetConfigRaw
I1212 00:37:39.348792 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetIP
I1212 00:37:39.351240 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.351592 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.351628 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.351860 104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
I1212 00:37:39.352092 104530 machine.go:88] provisioning docker machine ...
I1212 00:37:39.352113 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:39.352303 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetMachineName
I1212 00:37:39.352445 104530 buildroot.go:166] provisioning hostname "multinode-859606-m02"
I1212 00:37:39.352470 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetMachineName
I1212 00:37:39.352609 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:39.354957 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.355309 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.355339 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.355537 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:39.355716 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.355867 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.355992 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:39.356149 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:37:39.356637 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.65 22 <nil> <nil>}
I1212 00:37:39.356656 104530 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-859606-m02 && echo "multinode-859606-m02" | sudo tee /etc/hostname
I1212 00:37:39.502532 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-859606-m02
I1212 00:37:39.502568 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:39.505328 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.505789 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.505823 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.505999 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:39.506231 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.506373 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.506531 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:39.506708 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:37:39.507067 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.65 22 <nil> <nil>}
I1212 00:37:39.507085 104530 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-859606-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-859606-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-859606-m02' | sudo tee -a /etc/hosts;
fi
fi
I1212 00:37:39.645009 104530 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1212 00:37:39.645036 104530 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17764-80294/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-80294/.minikube}
I1212 00:37:39.645051 104530 buildroot.go:174] setting up certificates
I1212 00:37:39.645059 104530 provision.go:83] configureAuth start
I1212 00:37:39.645068 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetMachineName
I1212 00:37:39.645319 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetIP
I1212 00:37:39.648244 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.648695 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.648726 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.648891 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:39.651280 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.651603 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.651634 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.651775 104530 provision.go:138] copyHostCerts
I1212 00:37:39.651810 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
I1212 00:37:39.651849 104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem, removing ...
I1212 00:37:39.651862 104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
I1212 00:37:39.651958 104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem (1078 bytes)
I1212 00:37:39.652055 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
I1212 00:37:39.652080 104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem, removing ...
I1212 00:37:39.652087 104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
I1212 00:37:39.652126 104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem (1123 bytes)
I1212 00:37:39.652240 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
I1212 00:37:39.652270 104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem, removing ...
I1212 00:37:39.652278 104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
I1212 00:37:39.652320 104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem (1679 bytes)
I1212 00:37:39.652413 104530 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem org=jenkins.multinode-859606-m02 san=[192.168.39.65 192.168.39.65 localhost 127.0.0.1 minikube multinode-859606-m02]
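
The generated server.pem carries the machine's IP and hostnames as subject alternative names (the san=[...] list above). A compact self-signed analogue with the same SANs; note this is only a sketch, since minikube actually signs the server cert with its CA key rather than self-signing:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-859606-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the log's san=[...] list.
		IPAddresses: []net.IP{net.ParseIP("192.168.39.65"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-859606-m02"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
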
I1212 00:37:39.786080 104530 provision.go:172] copyRemoteCerts
I1212 00:37:39.786162 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1212 00:37:39.786193 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:39.788840 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.789107 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.789147 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.789364 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:39.789559 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.789730 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:39.789868 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
I1212 00:37:39.884832 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1212 00:37:39.884920 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1212 00:37:39.908744 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem -> /etc/docker/server.pem
I1212 00:37:39.908817 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I1212 00:37:39.932380 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1212 00:37:39.932446 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1212 00:37:39.956816 104530 provision.go:86] duration metric: configureAuth took 311.743914ms
I1212 00:37:39.956853 104530 buildroot.go:189] setting minikube options for container-runtime
I1212 00:37:39.957091 104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:37:39.957118 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:39.957389 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:39.960094 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.960494 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.960529 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.960669 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:39.960847 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.961048 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.961181 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:39.961346 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:37:39.961722 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.65 22 <nil> <nil>}
I1212 00:37:39.961740 104530 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1212 00:37:40.093977 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I1212 00:37:40.094012 104530 buildroot.go:70] root file system type: tmpfs
I1212 00:37:40.094174 104530 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1212 00:37:40.094208 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:40.097149 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:40.097507 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:40.097534 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:40.097760 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:40.098013 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:40.098210 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:40.098318 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:40.098507 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:37:40.098848 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.65 22 <nil> <nil>}
I1212 00:37:40.098916 104530 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.40"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1212 00:37:40.241326 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.40
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1212 00:37:40.241355 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:40.243925 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:40.244271 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:40.244296 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:40.244504 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:40.244693 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:40.244875 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:40.245023 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:40.245173 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:37:40.245547 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.65 22 <nil> <nil>}
I1212 00:37:40.245565 104530 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1212 00:37:41.126250 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
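
The diff || { mv; daemon-reload; restart; } command above is an idempotent update: docker is only reinstalled and restarted when the rendered unit differs from what is on disk (here diff failed because the unit file did not exist yet, so the new one was installed and the symlink created). A sketch of the same write-only-if-changed pattern in Go, operating on a local path for illustration:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// writeIfChanged rewrites path only when content differs from what is already
// there, so the caller can skip daemon-reload/restart on no-op updates.
func writeIfChanged(path string, content []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // unchanged: no restart needed
	}
	if err := os.WriteFile(path, content, 0644); err != nil {
		return false, err
	}
	return true, nil // caller would daemon-reload && restart here
}

func main() {
	changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n...\n"))
	fmt.Println("changed:", changed, "err:", err)
}
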
I1212 00:37:41.126280 104530 machine.go:91] provisioned docker machine in 1.774172725s
I1212 00:37:41.126296 104530 start.go:300] post-start starting for "multinode-859606-m02" (driver="kvm2")
I1212 00:37:41.126310 104530 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1212 00:37:41.126329 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:41.126679 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1212 00:37:41.126707 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:41.129504 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.129833 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:41.129866 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.130073 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:41.130301 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:41.130478 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:41.130687 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
I1212 00:37:41.225898 104530 ssh_runner.go:195] Run: cat /etc/os-release
I1212 00:37:41.230065 104530 command_runner.go:130] > NAME=Buildroot
I1212 00:37:41.230089 104530 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
I1212 00:37:41.230096 104530 command_runner.go:130] > ID=buildroot
I1212 00:37:41.230109 104530 command_runner.go:130] > VERSION_ID=2021.02.12
I1212 00:37:41.230117 104530 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I1212 00:37:41.230251 104530 info.go:137] Remote host: Buildroot 2021.02.12
I1212 00:37:41.230275 104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/addons for local assets ...
I1212 00:37:41.230351 104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/files for local assets ...
I1212 00:37:41.230452 104530 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> 876092.pem in /etc/ssl/certs
I1212 00:37:41.230466 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> /etc/ssl/certs/876092.pem
I1212 00:37:41.230586 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1212 00:37:41.239133 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem --> /etc/ssl/certs/876092.pem (1708 bytes)
I1212 00:37:41.262487 104530 start.go:303] post-start completed in 136.174154ms
I1212 00:37:41.262513 104530 fix.go:56] fixHost completed within 20.563707335s
I1212 00:37:41.262539 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:41.265240 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.265538 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:41.265572 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.265778 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:41.265950 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:41.266126 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:41.266310 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:41.266489 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:37:41.266856 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.65 22 <nil> <nil>}
I1212 00:37:41.266871 104530 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1212 00:37:41.396610 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702341461.344204788
I1212 00:37:41.396638 104530 fix.go:206] guest clock: 1702341461.344204788
I1212 00:37:41.396649 104530 fix.go:219] Guest: 2023-12-12 00:37:41.344204788 +0000 UTC Remote: 2023-12-12 00:37:41.262521516 +0000 UTC m=+81.745766897 (delta=81.683272ms)
I1212 00:37:41.396669 104530 fix.go:190] guest clock delta is within tolerance: 81.683272ms
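[editor's note] The guest-clock check above compares the host's wall clock with the guest's "date +%s.%N" and accepts small deltas (here ~82ms) as within tolerance. A hand-rolled equivalent, reusing the SSH key path and IP from this run (bc assumed available on the host):

    GUEST=$(ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa \
      docker@192.168.39.65 'date +%s.%N')
    HOST=$(date +%s.%N)
    # delta = guest - host; minikube proceeds when this stays under its tolerance
    echo "delta: $(echo "$GUEST - $HOST" | bc)s"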
I1212 00:37:41.396676 104530 start.go:83] releasing machines lock for "multinode-859606-m02", held for 20.697881438s
I1212 00:37:41.396707 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:41.396998 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetIP
I1212 00:37:41.399794 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.400251 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:41.400284 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.402301 104530 out.go:177] * Found network options:
I1212 00:37:41.403745 104530 out.go:177] - NO_PROXY=192.168.39.40
W1212 00:37:41.404991 104530 proxy.go:119] fail to check proxy env: Error ip not in block
I1212 00:37:41.405014 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:41.405584 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:41.405757 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:41.405832 104530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1212 00:37:41.405875 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
W1212 00:37:41.405953 104530 proxy.go:119] fail to check proxy env: Error ip not in block
I1212 00:37:41.406034 104530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1212 00:37:41.406061 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:41.408298 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.408470 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.408704 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:41.408734 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.408860 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:41.408890 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:41.408931 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.409042 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:41.409170 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:41.409276 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:41.409448 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:41.409487 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:41.409614 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
I1212 00:37:41.409611 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
I1212 00:37:41.504163 104530 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W1212 00:37:41.504453 104530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1212 00:37:41.504528 104530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1212 00:37:41.528894 104530 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I1212 00:37:41.528955 104530 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I1212 00:37:41.529013 104530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
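[editor's note] The find/-exec above is just a guarded rename; spelled out for the one file it matched in this run:

    sudo mv /etc/cni/net.d/87-podman-bridge.conflist \
            /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
    # the .mk_disabled suffix keeps the config on disk but stops CNI from loading it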
I1212 00:37:41.529030 104530 start.go:475] detecting cgroup driver to use...
I1212 00:37:41.529132 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1212 00:37:41.549871 104530 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I1212 00:37:41.549952 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I1212 00:37:41.559926 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1212 00:37:41.569604 104530 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1212 00:37:41.569669 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1212 00:37:41.578872 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:37:41.588052 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1212 00:37:41.597753 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:37:41.607940 104530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1212 00:37:41.618063 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1212 00:37:41.628111 104530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1212 00:37:41.637202 104530 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I1212 00:37:41.637321 104530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1212 00:37:41.645675 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:37:41.756330 104530 ssh_runner.go:195] Run: sudo systemctl restart containerd
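[editor's note] The sed sequence above pins the pause image to registry.k8s.io/pause:3.9, forces the cgroupfs driver (SystemdCgroup = false), migrates runc to the v2 shim, and resets conf_dir, then restarts containerd. A quick way to confirm the edits landed, assuming the same config path as this Buildroot guest:

    grep -nE 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    sudo systemctl is-active containerd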
I1212 00:37:41.774116 104530 start.go:475] detecting cgroup driver to use...
I1212 00:37:41.774203 104530 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1212 00:37:41.790254 104530 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I1212 00:37:41.790292 104530 command_runner.go:130] > [Unit]
I1212 00:37:41.790304 104530 command_runner.go:130] > Description=Docker Application Container Engine
I1212 00:37:41.790313 104530 command_runner.go:130] > Documentation=https://docs.docker.com
I1212 00:37:41.790321 104530 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I1212 00:37:41.790329 104530 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I1212 00:37:41.790357 104530 command_runner.go:130] > StartLimitBurst=3
I1212 00:37:41.790372 104530 command_runner.go:130] > StartLimitIntervalSec=60
I1212 00:37:41.790377 104530 command_runner.go:130] > [Service]
I1212 00:37:41.790387 104530 command_runner.go:130] > Type=notify
I1212 00:37:41.790391 104530 command_runner.go:130] > Restart=on-failure
I1212 00:37:41.790396 104530 command_runner.go:130] > Environment=NO_PROXY=192.168.39.40
I1212 00:37:41.790406 104530 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I1212 00:37:41.790421 104530 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I1212 00:37:41.790437 104530 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I1212 00:37:41.790453 104530 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I1212 00:37:41.790463 104530 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I1212 00:37:41.790474 104530 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I1212 00:37:41.790485 104530 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I1212 00:37:41.790548 104530 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I1212 00:37:41.790571 104530 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I1212 00:37:41.790578 104530 command_runner.go:130] > ExecStart=
I1212 00:37:41.790612 104530 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I1212 00:37:41.790624 104530 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I1212 00:37:41.790640 104530 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I1212 00:37:41.790650 104530 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I1212 00:37:41.790654 104530 command_runner.go:130] > LimitNOFILE=infinity
I1212 00:37:41.790662 104530 command_runner.go:130] > LimitNPROC=infinity
I1212 00:37:41.790671 104530 command_runner.go:130] > LimitCORE=infinity
I1212 00:37:41.790681 104530 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I1212 00:37:41.790693 104530 command_runner.go:130] > # Only systemd 226 and above support this version.
I1212 00:37:41.790703 104530 command_runner.go:130] > TasksMax=infinity
I1212 00:37:41.790718 104530 command_runner.go:130] > TimeoutStartSec=0
I1212 00:37:41.790729 104530 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I1212 00:37:41.790740 104530 command_runner.go:130] > Delegate=yes
I1212 00:37:41.790749 104530 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I1212 00:37:41.790764 104530 command_runner.go:130] > KillMode=process
I1212 00:37:41.790774 104530 command_runner.go:130] > [Install]
I1212 00:37:41.790781 104530 command_runner.go:130] > WantedBy=multi-user.target
I1212 00:37:41.790852 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1212 00:37:41.807010 104530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1212 00:37:41.831315 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1212 00:37:41.843702 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1212 00:37:41.855452 104530 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1212 00:37:41.887392 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1212 00:37:41.900115 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1212 00:37:41.917122 104530 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I1212 00:37:41.917212 104530 ssh_runner.go:195] Run: which cri-dockerd
I1212 00:37:41.920948 104530 command_runner.go:130] > /usr/bin/cri-dockerd
I1212 00:37:41.921049 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1212 00:37:41.929638 104530 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
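[editor's note] With /etc/crictl.yaml now pointing at the cri-dockerd socket (written above), crictl on the node should answer over that endpoint. A sanity check, assuming crictl is present on the guest image:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info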
I1212 00:37:41.945850 104530 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1212 00:37:42.053680 104530 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1212 00:37:42.164852 104530 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
I1212 00:37:42.164906 104530 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1212 00:37:42.181956 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:37:42.292269 104530 ssh_runner.go:195] Run: sudo systemctl restart docker
I1212 00:37:43.762922 104530 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.47061306s)
I1212 00:37:43.762999 104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1212 00:37:43.866143 104530 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1212 00:37:43.974469 104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1212 00:37:44.089805 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:37:44.189760 104530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1212 00:37:44.203372 104530 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
I1212 00:37:44.203469 104530 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
I1212 00:37:44.213697 104530 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 00:37:32 UTC, ends at Tue 2023-12-12 00:37:44 UTC. --
I1212 00:37:44.213720 104530 command_runner.go:130] > Dec 12 00:37:33 minikube systemd[1]: Starting CRI Docker Socket for the API.
I1212 00:37:44.213727 104530 command_runner.go:130] > Dec 12 00:37:33 minikube systemd[1]: Listening on CRI Docker Socket for the API.
I1212 00:37:44.213734 104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: cri-docker.socket: Succeeded.
I1212 00:37:44.213740 104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Closed CRI Docker Socket for the API.
I1212 00:37:44.213747 104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Stopping CRI Docker Socket for the API.
I1212 00:37:44.213755 104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Starting CRI Docker Socket for the API.
I1212 00:37:44.213761 104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Listening on CRI Docker Socket for the API.
I1212 00:37:44.213770 104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: cri-docker.socket: Succeeded.
I1212 00:37:44.213778 104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Closed CRI Docker Socket for the API.
I1212 00:37:44.213786 104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Stopping CRI Docker Socket for the API.
I1212 00:37:44.213794 104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Starting CRI Docker Socket for the API.
I1212 00:37:44.213801 104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Listening on CRI Docker Socket for the API.
I1212 00:37:44.213814 104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
I1212 00:37:44.213828 104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
I1212 00:37:44.213842 104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
I1212 00:37:44.213860 104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Starting CRI Docker Socket for the API.
I1212 00:37:44.213874 104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Listening on CRI Docker Socket for the API.
I1212 00:37:44.213887 104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
I1212 00:37:44.213899 104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
I1212 00:37:44.213913 104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
I1212 00:37:44.213929 104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
I1212 00:37:44.213946 104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
I1212 00:37:44.216418 104530 out.go:177]
W1212 00:37:44.218157 104530 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
sudo journalctl --no-pager -u cri-docker.socket:
-- stdout --
-- Journal begins at Tue 2023-12-12 00:37:32 UTC, ends at Tue 2023-12-12 00:37:44 UTC. --
Dec 12 00:37:33 minikube systemd[1]: Starting CRI Docker Socket for the API.
Dec 12 00:37:33 minikube systemd[1]: Listening on CRI Docker Socket for the API.
Dec 12 00:37:36 minikube systemd[1]: cri-docker.socket: Succeeded.
Dec 12 00:37:36 minikube systemd[1]: Closed CRI Docker Socket for the API.
Dec 12 00:37:36 minikube systemd[1]: Stopping CRI Docker Socket for the API.
Dec 12 00:37:36 minikube systemd[1]: Starting CRI Docker Socket for the API.
Dec 12 00:37:36 minikube systemd[1]: Listening on CRI Docker Socket for the API.
Dec 12 00:37:38 minikube systemd[1]: cri-docker.socket: Succeeded.
Dec 12 00:37:38 minikube systemd[1]: Closed CRI Docker Socket for the API.
Dec 12 00:37:38 minikube systemd[1]: Stopping CRI Docker Socket for the API.
Dec 12 00:37:38 minikube systemd[1]: Starting CRI Docker Socket for the API.
Dec 12 00:37:38 minikube systemd[1]: Listening on CRI Docker Socket for the API.
Dec 12 00:37:40 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Starting CRI Docker Socket for the API.
Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Listening on CRI Docker Socket for the API.
Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
-- /stdout --
W1212 00:37:44.218182 104530 out.go:239] *
W1212 00:37:44.219022 104530 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1212 00:37:44.221199 104530 out.go:177]
** /stderr **
multinode_test.go:384: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-859606 --wait=true -v=8 --alsologtostderr --driver=kvm2 " : exit status 90
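[editor's note] The journal output above pinpoints the failure: systemd refuses to start cri-docker.socket while its paired cri-docker.service is still active ("Socket service cri-docker.service already active, refusing"), so the "sudo systemctl restart cri-docker.socket" issued by minikube exits with status 1 and the start aborts with RUNTIME_ENABLE. A plausible manual workaround on the guest is to stop the service before cycling the socket (a sketch of the general systemd remedy, not what minikube itself does):

    sudo systemctl stop cri-docker.service
    sudo systemctl restart cri-docker.socket
    sudo systemctl start cri-docker.service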
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-859606 -n multinode-859606
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-859606 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-859606 logs -n 25: (1.33359849s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| cp | multinode-859606 cp multinode-859606-m02:/home/docker/cp-test.txt | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | multinode-859606:/home/docker/cp-test_multinode-859606-m02_multinode-859606.txt | | | | | |
| ssh | multinode-859606 ssh -n | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | multinode-859606-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-859606 ssh -n multinode-859606 sudo cat | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | /home/docker/cp-test_multinode-859606-m02_multinode-859606.txt | | | | | |
| cp | multinode-859606 cp multinode-859606-m02:/home/docker/cp-test.txt | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | multinode-859606-m03:/home/docker/cp-test_multinode-859606-m02_multinode-859606-m03.txt | | | | | |
| ssh | multinode-859606 ssh -n | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | multinode-859606-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-859606 ssh -n multinode-859606-m03 sudo cat | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | /home/docker/cp-test_multinode-859606-m02_multinode-859606-m03.txt | | | | | |
| cp | multinode-859606 cp testdata/cp-test.txt | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | multinode-859606-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-859606 ssh -n | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | multinode-859606-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-859606 cp multinode-859606-m03:/home/docker/cp-test.txt | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | /tmp/TestMultiNodeserialCopyFile1229349573/001/cp-test_multinode-859606-m03.txt | | | | | |
| ssh | multinode-859606 ssh -n | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | multinode-859606-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-859606 cp multinode-859606-m03:/home/docker/cp-test.txt | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | multinode-859606:/home/docker/cp-test_multinode-859606-m03_multinode-859606.txt | | | | | |
| ssh | multinode-859606 ssh -n | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | multinode-859606-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-859606 ssh -n multinode-859606 sudo cat | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | /home/docker/cp-test_multinode-859606-m03_multinode-859606.txt | | | | | |
| cp | multinode-859606 cp multinode-859606-m03:/home/docker/cp-test.txt | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | multinode-859606-m02:/home/docker/cp-test_multinode-859606-m03_multinode-859606-m02.txt | | | | | |
| ssh | multinode-859606 ssh -n | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | multinode-859606-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-859606 ssh -n multinode-859606-m02 sudo cat | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| | /home/docker/cp-test_multinode-859606-m03_multinode-859606-m02.txt | | | | | |
| node | multinode-859606 node stop m03 | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:32 UTC |
| node | multinode-859606 node start | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:32 UTC | 12 Dec 23 00:33 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-859606 | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | |
| stop | -p multinode-859606 | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:33 UTC |
| start | -p multinode-859606 | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:33 UTC | 12 Dec 23 00:35 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-859606 | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:35 UTC | |
| node | multinode-859606 node delete | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:35 UTC | 12 Dec 23 00:35 UTC |
| | m03 | | | | | |
| stop | multinode-859606 stop | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:35 UTC | 12 Dec 23 00:36 UTC |
| start | -p multinode-859606 | multinode-859606 | jenkins | v1.32.0 | 12 Dec 23 00:36 UTC | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/12/12 00:36:19
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.21.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1212 00:36:19.566152 104530 out.go:296] Setting OutFile to fd 1 ...
I1212 00:36:19.566265 104530 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:19.566273 104530 out.go:309] Setting ErrFile to fd 2...
I1212 00:36:19.566277 104530 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:19.566462 104530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17764-80294/.minikube/bin
I1212 00:36:19.566987 104530 out.go:303] Setting JSON to false
I1212 00:36:19.567880 104530 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11880,"bootTime":1702329500,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1212 00:36:19.567966 104530 start.go:138] virtualization: kvm guest
I1212 00:36:19.570536 104530 out.go:177] * [multinode-859606] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I1212 00:36:19.572060 104530 notify.go:220] Checking for updates...
I1212 00:36:19.572071 104530 out.go:177] - MINIKUBE_LOCATION=17764
I1212 00:36:19.573648 104530 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1212 00:36:19.575043 104530 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17764-80294/kubeconfig
I1212 00:36:19.576502 104530 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
I1212 00:36:19.578073 104530 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1212 00:36:19.579463 104530 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1212 00:36:19.581288 104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:36:19.581767 104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:36:19.581821 104530 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:36:19.596096 104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
I1212 00:36:19.596488 104530 main.go:141] libmachine: () Calling .GetVersion
I1212 00:36:19.597060 104530 main.go:141] libmachine: Using API Version 1
I1212 00:36:19.597091 104530 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:36:19.597481 104530 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:36:19.597646 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:19.597948 104530 driver.go:392] Setting default libvirt URI to qemu:///system
I1212 00:36:19.598247 104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:36:19.598293 104530 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:36:19.612639 104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43365
I1212 00:36:19.613044 104530 main.go:141] libmachine: () Calling .GetVersion
I1212 00:36:19.613494 104530 main.go:141] libmachine: Using API Version 1
I1212 00:36:19.613515 104530 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:36:19.613814 104530 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:36:19.613998 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:19.648526 104530 out.go:177] * Using the kvm2 driver based on existing profile
I1212 00:36:19.650074 104530 start.go:298] selected driver: kvm2
I1212 00:36:19.650086 104530 start.go:902] validating driver "kvm2" against &{Name:multinode-859606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false ku
beflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMne
tPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1212 00:36:19.650266 104530 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1212 00:36:19.650710 104530 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:36:19.650794 104530 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17764-80294/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1212 00:36:19.664949 104530 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
I1212 00:36:19.665848 104530 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1212 00:36:19.665938 104530 cni.go:84] Creating CNI manager for ""
I1212 00:36:19.665955 104530 cni.go:136] 2 nodes found, recommending kindnet
I1212 00:36:19.665965 104530 start_flags.go:323] config:
{Name:multinode-859606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false
nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1212 00:36:19.666224 104530 iso.go:125] acquiring lock: {Name:mk9f395cbf4246894893bf64341667bb412992c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:36:19.668183 104530 out.go:177] * Starting control plane node multinode-859606 in cluster multinode-859606
I1212 00:36:19.669663 104530 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I1212 00:36:19.669706 104530 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
I1212 00:36:19.669717 104530 cache.go:56] Caching tarball of preloaded images
I1212 00:36:19.669796 104530 preload.go:174] Found /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1212 00:36:19.669808 104530 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I1212 00:36:19.669923 104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
I1212 00:36:19.670107 104530 start.go:365] acquiring machines lock for multinode-859606: {Name:mk381e91746c2e5b8a4620fe3fd447d80375e413 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1212 00:36:19.670157 104530 start.go:369] acquired machines lock for "multinode-859606" in 32.405µs
I1212 00:36:19.670175 104530 start.go:96] Skipping create...Using existing machine configuration
I1212 00:36:19.670183 104530 fix.go:54] fixHost starting:
I1212 00:36:19.670424 104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:36:19.670455 104530 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:36:19.684474 104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
I1212 00:36:19.684891 104530 main.go:141] libmachine: () Calling .GetVersion
I1212 00:36:19.685333 104530 main.go:141] libmachine: Using API Version 1
I1212 00:36:19.685356 104530 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:36:19.685644 104530 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:36:19.685828 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:19.685946 104530 main.go:141] libmachine: (multinode-859606) Calling .GetState
I1212 00:36:19.687411 104530 fix.go:102] recreateIfNeeded on multinode-859606: state=Stopped err=<nil>
I1212 00:36:19.687443 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
W1212 00:36:19.687615 104530 fix.go:128] unexpected machine state, will restart: <nil>
I1212 00:36:19.689763 104530 out.go:177] * Restarting existing kvm2 VM for "multinode-859606" ...
I1212 00:36:19.691324 104530 main.go:141] libmachine: (multinode-859606) Calling .Start
I1212 00:36:19.691550 104530 main.go:141] libmachine: (multinode-859606) Ensuring networks are active...
I1212 00:36:19.692253 104530 main.go:141] libmachine: (multinode-859606) Ensuring network default is active
I1212 00:36:19.692574 104530 main.go:141] libmachine: (multinode-859606) Ensuring network mk-multinode-859606 is active
I1212 00:36:19.692847 104530 main.go:141] libmachine: (multinode-859606) Getting domain xml...
I1212 00:36:19.693505 104530 main.go:141] libmachine: (multinode-859606) Creating domain...
I1212 00:36:20.929419 104530 main.go:141] libmachine: (multinode-859606) Waiting to get IP...
I1212 00:36:20.930523 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:20.930912 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:20.931040 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:20.930906 104565 retry.go:31] will retry after 273.212272ms: waiting for machine to come up
I1212 00:36:21.205460 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:21.205872 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:21.205901 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:21.205852 104565 retry.go:31] will retry after 326.892458ms: waiting for machine to come up
I1212 00:36:21.534529 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:21.534921 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:21.534943 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:21.534891 104565 retry.go:31] will retry after 343.135816ms: waiting for machine to come up
I1212 00:36:21.879459 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:21.879929 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:21.879953 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:21.879870 104565 retry.go:31] will retry after 589.671783ms: waiting for machine to come up
I1212 00:36:22.471637 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:22.472097 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:22.472120 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:22.472073 104565 retry.go:31] will retry after 637.139279ms: waiting for machine to come up
I1212 00:36:23.110881 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:23.111236 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:23.111267 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:23.111178 104565 retry.go:31] will retry after 745.620292ms: waiting for machine to come up
I1212 00:36:23.858157 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:23.858677 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:23.858707 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:23.858634 104565 retry.go:31] will retry after 1.181130732s: waiting for machine to come up
I1212 00:36:25.041534 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:25.041972 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:25.042004 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:25.041923 104565 retry.go:31] will retry after 1.339637741s: waiting for machine to come up
I1212 00:36:26.383605 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:26.383992 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:26.384019 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:26.383923 104565 retry.go:31] will retry after 1.520765812s: waiting for machine to come up
I1212 00:36:27.906937 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:27.907387 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:27.907415 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:27.907357 104565 retry.go:31] will retry after 1.874600317s: waiting for machine to come up
I1212 00:36:29.783675 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:29.784134 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:29.784174 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:29.784075 104565 retry.go:31] will retry after 2.274077714s: waiting for machine to come up
I1212 00:36:32.061527 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:32.061959 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:32.061986 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:32.061913 104565 retry.go:31] will retry after 3.21102487s: waiting for machine to come up
I1212 00:36:35.274900 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:35.275327 104530 main.go:141] libmachine: (multinode-859606) DBG | unable to find current IP address of domain multinode-859606 in network mk-multinode-859606
I1212 00:36:35.275356 104530 main.go:141] libmachine: (multinode-859606) DBG | I1212 00:36:35.275295 104565 retry.go:31] will retry after 4.00191762s: waiting for machine to come up
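[editor's note] The retry lines above are libmachine polling libvirt's DHCP leases with growing delays (273ms up to ~4s) until the domain's MAC acquires an address. Roughly equivalent, assuming virsh is available on the host (network name and MAC taken from this log):

    while ! virsh net-dhcp-leases mk-multinode-859606 | grep -q '52:54:00:16:26:7f'; do
      sleep 1   # the real loop uses increasing randomized delays, not a fixed sleep
    done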
I1212 00:36:39.281352 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.281835 104530 main.go:141] libmachine: (multinode-859606) Found IP for machine: 192.168.39.40
I1212 00:36:39.281858 104530 main.go:141] libmachine: (multinode-859606) Reserving static IP address...
I1212 00:36:39.281874 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has current primary IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.282305 104530 main.go:141] libmachine: (multinode-859606) Reserved static IP address: 192.168.39.40
I1212 00:36:39.282362 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "multinode-859606", mac: "52:54:00:16:26:7f", ip: "192.168.39.40"} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.282382 104530 main.go:141] libmachine: (multinode-859606) Waiting for SSH to be available...
I1212 00:36:39.282413 104530 main.go:141] libmachine: (multinode-859606) DBG | skip adding static IP to network mk-multinode-859606 - found existing host DHCP lease matching {name: "multinode-859606", mac: "52:54:00:16:26:7f", ip: "192.168.39.40"}
I1212 00:36:39.282430 104530 main.go:141] libmachine: (multinode-859606) DBG | Getting to WaitForSSH function...
I1212 00:36:39.284738 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.285057 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.285110 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.285169 104530 main.go:141] libmachine: (multinode-859606) DBG | Using SSH client type: external
I1212 00:36:39.285210 104530 main.go:141] libmachine: (multinode-859606) DBG | Using SSH private key: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa (-rw-------)
I1212 00:36:39.285247 104530 main.go:141] libmachine: (multinode-859606) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa -p 22] /usr/bin/ssh <nil>}
I1212 00:36:39.285259 104530 main.go:141] libmachine: (multinode-859606) DBG | About to run SSH command:
I1212 00:36:39.285268 104530 main.go:141] libmachine: (multinode-859606) DBG | exit 0
I1212 00:36:39.375522 104530 main.go:141] libmachine: (multinode-859606) DBG | SSH cmd err, output: <nil>:
I1212 00:36:39.375955 104530 main.go:141] libmachine: (multinode-859606) Calling .GetConfigRaw
I1212 00:36:39.376683 104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
I1212 00:36:39.379083 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.379448 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.379483 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.379735 104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
I1212 00:36:39.379953 104530 machine.go:88] provisioning docker machine ...
I1212 00:36:39.379970 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:39.380177 104530 main.go:141] libmachine: (multinode-859606) Calling .GetMachineName
I1212 00:36:39.380335 104530 buildroot.go:166] provisioning hostname "multinode-859606"
I1212 00:36:39.380350 104530 main.go:141] libmachine: (multinode-859606) Calling .GetMachineName
I1212 00:36:39.380483 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:39.382706 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.383084 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.383109 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.383231 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:39.383413 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.383548 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.383686 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:39.383852 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:36:39.384221 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.40 22 <nil> <nil>}
I1212 00:36:39.384236 104530 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-859606 && echo "multinode-859606" | sudo tee /etc/hostname
I1212 00:36:39.519767 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-859606
I1212 00:36:39.519800 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:39.522378 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.522790 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.522832 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.522956 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:39.523177 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.523364 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.523491 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:39.523659 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:36:39.523993 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.40 22 <nil> <nil>}
I1212 00:36:39.524011 104530 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-859606' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-859606/g' /etc/hosts;
  else
    echo '127.0.1.1 multinode-859606' | sudo tee -a /etc/hosts;
  fi
fi
I1212 00:36:39.656285 104530 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1212 00:36:39.656370 104530 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17764-80294/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-80294/.minikube}
I1212 00:36:39.656408 104530 buildroot.go:174] setting up certificates
I1212 00:36:39.656417 104530 provision.go:83] configureAuth start
I1212 00:36:39.656432 104530 main.go:141] libmachine: (multinode-859606) Calling .GetMachineName
I1212 00:36:39.656702 104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
I1212 00:36:39.659384 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.659735 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.659764 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.659868 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:39.662155 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.662517 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.662547 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.662670 104530 provision.go:138] copyHostCerts
I1212 00:36:39.662701 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
I1212 00:36:39.662745 104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem, removing ...
I1212 00:36:39.662764 104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
I1212 00:36:39.662840 104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem (1078 bytes)
I1212 00:36:39.662932 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
I1212 00:36:39.662954 104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem, removing ...
I1212 00:36:39.662963 104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
I1212 00:36:39.662998 104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem (1123 bytes)
I1212 00:36:39.663072 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
I1212 00:36:39.663106 104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem, removing ...
I1212 00:36:39.663115 104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
I1212 00:36:39.663149 104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem (1679 bytes)
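Each copyHostCerts entry above follows the same found/remove/copy pattern for ca.pem, cert.pem and key.pem. The refresh reduces to one line per file (a sketch only; minikube does this in Go via exec_runner, and the install(1) form and 0600 mode are assumptions):

# Refresh the three host certs under MINIKUBE_HOME, replacing stale copies.
MINIKUBE_HOME=/home/jenkins/minikube-integration/17764-80294/.minikube
for f in ca.pem cert.pem key.pem; do
  install -m 0600 "$MINIKUBE_HOME/certs/$f" "$MINIKUBE_HOME/$f"
done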
I1212 00:36:39.663211 104530 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem org=jenkins.multinode-859606 san=[192.168.39.40 192.168.39.40 localhost 127.0.0.1 minikube multinode-859606]
I1212 00:36:39.752771 104530 provision.go:172] copyRemoteCerts
I1212 00:36:39.752840 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1212 00:36:39.752864 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:39.755641 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.755981 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.756012 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.756148 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:39.756362 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.756505 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:39.756620 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
I1212 00:36:39.848757 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1212 00:36:39.848827 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1212 00:36:39.872145 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1212 00:36:39.872230 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1212 00:36:39.895524 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem -> /etc/docker/server.pem
I1212 00:36:39.895625 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I1212 00:36:39.919081 104530 provision.go:86] duration metric: configureAuth took 262.648578ms
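configureAuth above regenerates the Docker server certificate against the minikube CA, using the san=[...] list from the generating step. minikube builds it with Go's crypto/x509; an equivalent openssl sketch (file names and the openssl route are assumptions, the SAN entries are copied from the log):

# Issue a server cert signed by the minikube CA, valid for the node IP,
# localhost and the cluster hostnames.
openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
  -out server.csr -subj "/O=jenkins.multinode-859606"
openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
  -CAcreateserial -out server.pem -days 365 \
  -extfile <(printf 'subjectAltName=IP:192.168.39.40,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-859606')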
I1212 00:36:39.919117 104530 buildroot.go:189] setting minikube options for container-runtime
I1212 00:36:39.919362 104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:36:39.919392 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:39.919652 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:39.922322 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.922662 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:39.922694 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:39.922873 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:39.923053 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.923205 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:39.923322 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:39.923479 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:36:39.923797 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.40 22 <nil> <nil>}
I1212 00:36:39.923808 104530 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1212 00:36:40.049654 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I1212 00:36:40.049683 104530 buildroot.go:70] root file system type: tmpfs
I1212 00:36:40.049826 104530 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1212 00:36:40.049854 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:40.052273 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:40.052615 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:40.052648 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:40.052798 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:40.053014 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:40.053178 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:40.053328 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:40.053470 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:36:40.053822 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.40 22 <nil> <nil>}
I1212 00:36:40.053890 104530 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1212 00:36:40.188800 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1212 00:36:40.188832 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:40.191559 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:40.191974 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:40.192007 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:40.192190 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:40.192371 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:40.192563 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:40.192665 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:40.192866 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:36:40.193267 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.40 22 <nil> <nil>}
I1212 00:36:40.193286 104530 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1212 00:36:41.206767 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
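Because the unit written above clears ExecStart= before setting it, exactly one start command survives systemd's merge, and the diff-or-replace one-liner only restarts Docker when the rendered unit actually changed. A quick follow-up check (not part of the log) that the rewritten unit is the one in effect:

# Show which unit file systemd resolved and the single active ExecStart.
systemctl cat docker | head -n 1
systemctl show docker -p ExecStart --no-pager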
I1212 00:36:41.206800 104530 machine.go:91] provisioned docker machine in 1.826833328s
I1212 00:36:41.206817 104530 start.go:300] post-start starting for "multinode-859606" (driver="kvm2")
I1212 00:36:41.206830 104530 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1212 00:36:41.206852 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:41.207178 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1212 00:36:41.207202 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:41.209997 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.210348 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:41.210381 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.210498 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:41.210690 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:41.210833 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:41.210981 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
I1212 00:36:41.301876 104530 ssh_runner.go:195] Run: cat /etc/os-release
I1212 00:36:41.306227 104530 command_runner.go:130] > NAME=Buildroot
I1212 00:36:41.306246 104530 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
I1212 00:36:41.306250 104530 command_runner.go:130] > ID=buildroot
I1212 00:36:41.306262 104530 command_runner.go:130] > VERSION_ID=2021.02.12
I1212 00:36:41.306266 104530 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I1212 00:36:41.306469 104530 info.go:137] Remote host: Buildroot 2021.02.12
I1212 00:36:41.306487 104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/addons for local assets ...
I1212 00:36:41.306534 104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/files for local assets ...
I1212 00:36:41.306599 104530 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> 876092.pem in /etc/ssl/certs
I1212 00:36:41.306609 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> /etc/ssl/certs/876092.pem
I1212 00:36:41.306693 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1212 00:36:41.315869 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem --> /etc/ssl/certs/876092.pem (1708 bytes)
I1212 00:36:41.338667 104530 start.go:303] post-start completed in 131.83456ms
I1212 00:36:41.338691 104530 fix.go:56] fixHost completed within 21.668507657s
I1212 00:36:41.338718 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:41.341292 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.341664 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:41.341694 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.341888 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:41.342101 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:41.342241 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:41.342408 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:41.342541 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:36:41.342886 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.40 22 <nil> <nil>}
I1212 00:36:41.342902 104530 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1212 00:36:41.468622 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702341401.415199028
I1212 00:36:41.468653 104530 fix.go:206] guest clock: 1702341401.415199028
I1212 00:36:41.468663 104530 fix.go:219] Guest: 2023-12-12 00:36:41.415199028 +0000 UTC Remote: 2023-12-12 00:36:41.338694258 +0000 UTC m=+21.821939649 (delta=76.50477ms)
I1212 00:36:41.468688 104530 fix.go:190] guest clock delta is within tolerance: 76.50477ms
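The guest-clock check above runs `date +%s.%N` in the VM and compares it with the host wall clock at the moment the SSH command returns; here the 76ms delta is accepted. A bash sketch of the same measurement (the log does not state the exact tolerance, so no threshold is enforced here):

# Measure guest/host clock skew over SSH.
guest=$(ssh docker@192.168.39.40 date +%s.%N)
host=$(date +%s.%N)
echo "guest clock delta: $(echo "$host - $guest" | bc)s"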
I1212 00:36:41.468695 104530 start.go:83] releasing machines lock for "multinode-859606", held for 21.798528151s
I1212 00:36:41.468721 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:41.469036 104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
I1212 00:36:41.471587 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.471996 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:41.472029 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.472196 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:41.472679 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:41.472871 104530 main.go:141] libmachine: (multinode-859606) Calling .DriverName
I1212 00:36:41.472969 104530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1212 00:36:41.473018 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:41.473104 104530 ssh_runner.go:195] Run: cat /version.json
I1212 00:36:41.473135 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHHostname
I1212 00:36:41.475372 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.475531 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.475739 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:41.475765 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.475949 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:41.475965 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:41.475979 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:41.476148 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHPort
I1212 00:36:41.476167 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:41.476322 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:41.476325 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHKeyPath
I1212 00:36:41.476507 104530 main.go:141] libmachine: (multinode-859606) Calling .GetSSHUsername
I1212 00:36:41.476503 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
I1212 00:36:41.476677 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606/id_rsa Username:docker}
I1212 00:36:41.586671 104530 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I1212 00:36:41.587519 104530 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
I1212 00:36:41.587648 104530 ssh_runner.go:195] Run: systemctl --version
I1212 00:36:41.593336 104530 command_runner.go:130] > systemd 247 (247)
I1212 00:36:41.593360 104530 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I1212 00:36:41.593423 104530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1212 00:36:41.598984 104530 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W1212 00:36:41.599019 104530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1212 00:36:41.599060 104530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1212 00:36:41.614960 104530 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I1212 00:36:41.614996 104530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1212 00:36:41.615008 104530 start.go:475] detecting cgroup driver to use...
I1212 00:36:41.615155 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1212 00:36:41.631749 104530 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I1212 00:36:41.632091 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I1212 00:36:41.642135 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1212 00:36:41.651964 104530 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1212 00:36:41.652033 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1212 00:36:41.661909 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:36:41.672216 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1212 00:36:41.681323 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:36:41.691358 104530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1212 00:36:41.701487 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1212 00:36:41.711473 104530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1212 00:36:41.720346 104530 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I1212 00:36:41.720490 104530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1212 00:36:41.729603 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:36:41.829613 104530 ssh_runner.go:195] Run: sudo systemctl restart containerd
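The sed chain above pins a handful of CRI settings in /etc/containerd/config.toml ahead of the restart just run. The log never prints the resulting file; assuming the stock containerd 1.x CRI layout, the touched values can be spot-checked like this:

# Spot-check the settings the sed edits pinned in containerd's config.
grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' \
  /etc/containerd/config.toml
# Expected values, per the commands above:
#   sandbox_image = "registry.k8s.io/pause:3.9"
#   restrict_oom_score_adj = false
#   SystemdCgroup = false
#   conf_dir = "/etc/cni/net.d"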
I1212 00:36:41.846807 104530 start.go:475] detecting cgroup driver to use...
I1212 00:36:41.846894 104530 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1212 00:36:41.859661 104530 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I1212 00:36:41.860603 104530 command_runner.go:130] > [Unit]
I1212 00:36:41.860621 104530 command_runner.go:130] > Description=Docker Application Container Engine
I1212 00:36:41.860629 104530 command_runner.go:130] > Documentation=https://docs.docker.com
I1212 00:36:41.860638 104530 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I1212 00:36:41.860648 104530 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I1212 00:36:41.860662 104530 command_runner.go:130] > StartLimitBurst=3
I1212 00:36:41.860671 104530 command_runner.go:130] > StartLimitIntervalSec=60
I1212 00:36:41.860679 104530 command_runner.go:130] > [Service]
I1212 00:36:41.860686 104530 command_runner.go:130] > Type=notify
I1212 00:36:41.860694 104530 command_runner.go:130] > Restart=on-failure
I1212 00:36:41.860715 104530 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I1212 00:36:41.860734 104530 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I1212 00:36:41.860748 104530 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I1212 00:36:41.860757 104530 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I1212 00:36:41.860767 104530 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I1212 00:36:41.860781 104530 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I1212 00:36:41.860791 104530 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I1212 00:36:41.860803 104530 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I1212 00:36:41.860812 104530 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I1212 00:36:41.860818 104530 command_runner.go:130] > ExecStart=
I1212 00:36:41.860837 104530 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I1212 00:36:41.860845 104530 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I1212 00:36:41.860854 104530 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I1212 00:36:41.860863 104530 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I1212 00:36:41.860867 104530 command_runner.go:130] > LimitNOFILE=infinity
I1212 00:36:41.860872 104530 command_runner.go:130] > LimitNPROC=infinity
I1212 00:36:41.860876 104530 command_runner.go:130] > LimitCORE=infinity
I1212 00:36:41.860881 104530 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I1212 00:36:41.860886 104530 command_runner.go:130] > # Only systemd 226 and above support this version.
I1212 00:36:41.860893 104530 command_runner.go:130] > TasksMax=infinity
I1212 00:36:41.860897 104530 command_runner.go:130] > TimeoutStartSec=0
I1212 00:36:41.860903 104530 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I1212 00:36:41.860907 104530 command_runner.go:130] > Delegate=yes
I1212 00:36:41.860912 104530 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I1212 00:36:41.860916 104530 command_runner.go:130] > KillMode=process
I1212 00:36:41.860921 104530 command_runner.go:130] > [Install]
I1212 00:36:41.860934 104530 command_runner.go:130] > WantedBy=multi-user.target
I1212 00:36:41.861408 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1212 00:36:41.875266 104530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1212 00:36:41.894559 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1212 00:36:41.907084 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1212 00:36:41.919502 104530 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1212 00:36:41.951570 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1212 00:36:41.963632 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1212 00:36:41.980713 104530 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I1212 00:36:41.980788 104530 ssh_runner.go:195] Run: which cri-dockerd
I1212 00:36:41.984334 104530 command_runner.go:130] > /usr/bin/cri-dockerd
I1212 00:36:41.984645 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1212 00:36:41.993852 104530 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1212 00:36:42.009538 104530 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1212 00:36:42.118265 104530 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1212 00:36:42.228976 104530 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
I1212 00:36:42.229126 104530 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1212 00:36:42.245311 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:36:42.345292 104530 ssh_runner.go:195] Run: sudo systemctl restart docker
I1212 00:36:43.830127 104530 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.484785426s)
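The 130-byte /etc/docker/daemon.json pushed just before this restart is what actually switches Docker to the cgroupfs driver. Its contents are not shown in the log; a plausible minimal reconstruction (the exact keys are an assumption, though native.cgroupdriver is the documented exec-opt):

# Force the cgroupfs driver, then reload and restart as the log does.
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{"exec-opts": ["native.cgroupdriver=cgroupfs"]}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker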
I1212 00:36:43.830211 104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1212 00:36:43.943279 104530 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1212 00:36:44.053942 104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1212 00:36:44.164844 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:36:44.275934 104530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1212 00:36:44.291963 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:36:44.392776 104530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I1212 00:36:44.474244 104530 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1212 00:36:44.474311 104530 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1212 00:36:44.480515 104530 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I1212 00:36:44.480535 104530 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I1212 00:36:44.480541 104530 command_runner.go:130] > Device: 16h/22d Inode: 819 Links: 1
I1212 00:36:44.480548 104530 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I1212 00:36:44.480554 104530 command_runner.go:130] > Access: 2023-12-12 00:36:44.352977075 +0000
I1212 00:36:44.480559 104530 command_runner.go:130] > Modify: 2023-12-12 00:36:44.352977075 +0000
I1212 00:36:44.480564 104530 command_runner.go:130] > Change: 2023-12-12 00:36:44.355977075 +0000
I1212 00:36:44.480567 104530 command_runner.go:130] > Birth: -
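The 60s wait for /var/run/cri-dockerd.sock is just polling until the socket file exists, which the stat above confirms on the first try. A bash equivalent of that wait (the 1s poll interval is an assumption):

# Poll up to 60s for the CRI socket before giving up.
for _ in $(seq 1 60); do
  [ -S /var/run/cri-dockerd.sock ] && break
  sleep 1
done
stat /var/run/cri-dockerd.sock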
I1212 00:36:44.480717 104530 start.go:543] Will wait 60s for crictl version
I1212 00:36:44.480773 104530 ssh_runner.go:195] Run: which crictl
I1212 00:36:44.484627 104530 command_runner.go:130] > /usr/bin/crictl
I1212 00:36:44.484837 104530 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1212 00:36:44.546652 104530 command_runner.go:130] > Version: 0.1.0
I1212 00:36:44.546684 104530 command_runner.go:130] > RuntimeName: docker
I1212 00:36:44.546692 104530 command_runner.go:130] > RuntimeVersion: 24.0.7
I1212 00:36:44.546719 104530 command_runner.go:130] > RuntimeApiVersion: v1
I1212 00:36:44.548311 104530 start.go:559] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.7
RuntimeApiVersion: v1
I1212 00:36:44.548389 104530 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1212 00:36:44.576456 104530 command_runner.go:130] > 24.0.7
I1212 00:36:44.576586 104530 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1212 00:36:44.599730 104530 command_runner.go:130] > 24.0.7
I1212 00:36:44.602571 104530 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
I1212 00:36:44.602615 104530 main.go:141] libmachine: (multinode-859606) Calling .GetIP
I1212 00:36:44.605105 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:44.605567 104530 main.go:141] libmachine: (multinode-859606) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:26:7f", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:36:32 +0000 UTC Type:0 Mac:52:54:00:16:26:7f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:multinode-859606 Clientid:01:52:54:00:16:26:7f}
I1212 00:36:44.605594 104530 main.go:141] libmachine: (multinode-859606) DBG | domain multinode-859606 has defined IP address 192.168.39.40 and MAC address 52:54:00:16:26:7f in network mk-multinode-859606
I1212 00:36:44.605828 104530 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1212 00:36:44.609867 104530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1212 00:36:44.622768 104530 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I1212 00:36:44.622818 104530 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1212 00:36:44.642692 104530 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
I1212 00:36:44.642720 104530 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
I1212 00:36:44.642729 104530 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
I1212 00:36:44.642749 104530 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
I1212 00:36:44.642756 104530 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
I1212 00:36:44.642764 104530 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I1212 00:36:44.642773 104530 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I1212 00:36:44.642785 104530 command_runner.go:130] > registry.k8s.io/pause:3.9
I1212 00:36:44.642793 104530 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I1212 00:36:44.642804 104530 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I1212 00:36:44.642841 104530 docker.go:671] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-scheduler:v1.28.4
kindest/kindnetd:v20230809-80a64d96
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I1212 00:36:44.642858 104530 docker.go:601] Images already preloaded, skipping extraction
I1212 00:36:44.642930 104530 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1212 00:36:44.661008 104530 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
I1212 00:36:44.661047 104530 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
I1212 00:36:44.661054 104530 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
I1212 00:36:44.661062 104530 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
I1212 00:36:44.661068 104530 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
I1212 00:36:44.661084 104530 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I1212 00:36:44.661093 104530 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I1212 00:36:44.661108 104530 command_runner.go:130] > registry.k8s.io/pause:3.9
I1212 00:36:44.661116 104530 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I1212 00:36:44.661126 104530 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I1212 00:36:44.661894 104530 docker.go:671] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-scheduler:v1.28.4
kindest/kindnetd:v20230809-80a64d96
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I1212 00:36:44.661911 104530 cache_images.go:84] Images are preloaded, skipping loading
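The two `docker images` listings are compared against the expected preload set for v1.28.4; since every image is present, the preload tarball is not extracted again. The check reduces to something like this (image list copied from the stdout block above):

# Report any expected preloaded image missing from the local daemon.
for img in \
  registry.k8s.io/kube-apiserver:v1.28.4 \
  registry.k8s.io/kube-controller-manager:v1.28.4 \
  registry.k8s.io/kube-proxy:v1.28.4 \
  registry.k8s.io/kube-scheduler:v1.28.4 \
  registry.k8s.io/etcd:3.5.9-0 \
  registry.k8s.io/coredns/coredns:v1.10.1 \
  registry.k8s.io/pause:3.9 \
  gcr.io/k8s-minikube/storage-provisioner:v5; do
  docker image inspect "$img" >/dev/null 2>&1 || echo "missing: $img"
done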
I1212 00:36:44.661965 104530 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1212 00:36:44.688198 104530 command_runner.go:130] > cgroupfs
I1212 00:36:44.688431 104530 cni.go:84] Creating CNI manager for ""
I1212 00:36:44.688451 104530 cni.go:136] 2 nodes found, recommending kindnet
I1212 00:36:44.688483 104530 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1212 00:36:44.688527 104530 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.40 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-859606 NodeName:multinode-859606 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.40"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.40 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1212 00:36:44.688714 104530 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.40
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "multinode-859606"
  kubeletExtraArgs:
    node-ip: 192.168.39.40
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.40"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
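Once the rendered config above lands on the node as /var/tmp/minikube/kubeadm.yaml.new (the scp a few lines below), it can be sanity-checked offline; `kubeadm config validate` exists in kubeadm v1.26+, and running it here is a suggestion rather than something the log shows:

# Validate the generated config against the v1beta3 schema.
sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate \
  --config /var/tmp/minikube/kubeadm.yaml.new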
I1212 00:36:44.688816 104530 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-859606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.40
[Install]
config:
{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1212 00:36:44.688879 104530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
I1212 00:36:44.697808 104530 command_runner.go:130] > kubeadm
I1212 00:36:44.697826 104530 command_runner.go:130] > kubectl
I1212 00:36:44.697831 104530 command_runner.go:130] > kubelet
I1212 00:36:44.697894 104530 binaries.go:44] Found k8s binaries, skipping transfer
I1212 00:36:44.697957 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1212 00:36:44.705971 104530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
I1212 00:36:44.720935 104530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1212 00:36:44.735886 104530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
I1212 00:36:44.751846 104530 ssh_runner.go:195] Run: grep 192.168.39.40 control-plane.minikube.internal$ /etc/hosts
I1212 00:36:44.755479 104530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.40 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1212 00:36:44.767240 104530 certs.go:56] Setting up /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606 for IP: 192.168.39.40
I1212 00:36:44.767277 104530 certs.go:190] acquiring lock for shared ca certs: {Name:mk30ad7b34272eb8ac2c2d0da18d8d4f87fa28a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:36:44.767442 104530 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.key
I1212 00:36:44.767492 104530 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.key
I1212 00:36:44.767569 104530 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.key
I1212 00:36:44.767614 104530 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.key.7fcbe345
I1212 00:36:44.767658 104530 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.key
I1212 00:36:44.767671 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1212 00:36:44.767685 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1212 00:36:44.767697 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1212 00:36:44.767709 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1212 00:36:44.767723 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1212 00:36:44.767736 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1212 00:36:44.767748 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1212 00:36:44.767759 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1212 00:36:44.767806 104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609.pem (1338 bytes)
W1212 00:36:44.767833 104530 certs.go:433] ignoring /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609_empty.pem, impossibly tiny 0 bytes
I1212 00:36:44.767842 104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem (1679 bytes)
I1212 00:36:44.767866 104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem (1078 bytes)
I1212 00:36:44.767895 104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem (1123 bytes)
I1212 00:36:44.767941 104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem (1679 bytes)
I1212 00:36:44.767991 104530 certs.go:437] found cert: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem (1708 bytes)
I1212 00:36:44.768017 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> /usr/share/ca-certificates/876092.pem
I1212 00:36:44.768033 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1212 00:36:44.768048 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609.pem -> /usr/share/ca-certificates/87609.pem
I1212 00:36:44.768657 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1212 00:36:44.791629 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1212 00:36:44.814579 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1212 00:36:44.837176 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1212 00:36:44.859769 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1212 00:36:44.882517 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1212 00:36:44.905279 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1212 00:36:44.927814 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1212 00:36:44.950936 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem --> /usr/share/ca-certificates/876092.pem (1708 bytes)
I1212 00:36:44.973314 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1212 00:36:44.995879 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/certs/87609.pem --> /usr/share/ca-certificates/87609.pem (1338 bytes)
I1212 00:36:45.018814 104530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
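"scp memory --> <path> (<n> bytes)" in these lines means the asset is generated in memory on the host and streamed to the VM over the established SSH session rather than copied from a file. A rough stand-in with plain ssh (the user@host and the content variable are hypothetical, not minikube's actual transport):

    # Stream generated content to a root-owned path on the node.
    printf '%s' "$GENERATED_KUBECONFIG" |
        ssh docker@192.168.39.40 'sudo tee /var/lib/minikube/kubeconfig >/dev/null'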
I1212 00:36:45.034741 104530 ssh_runner.go:195] Run: openssl version
I1212 00:36:45.040084 104530 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I1212 00:36:45.040159 104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1212 00:36:45.049710 104530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1212 00:36:45.054223 104530 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
I1212 00:36:45.054253 104530 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 00:11 /usr/share/ca-certificates/minikubeCA.pem
I1212 00:36:45.054292 104530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1212 00:36:45.059527 104530 command_runner.go:130] > b5213941
I1212 00:36:45.059696 104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1212 00:36:45.069012 104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/87609.pem && ln -fs /usr/share/ca-certificates/87609.pem /etc/ssl/certs/87609.pem"
I1212 00:36:45.078693 104530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/87609.pem
I1212 00:36:45.083070 104530 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:16 /usr/share/ca-certificates/87609.pem
I1212 00:36:45.083289 104530 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:16 /usr/share/ca-certificates/87609.pem
I1212 00:36:45.083354 104530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/87609.pem
I1212 00:36:45.089122 104530 command_runner.go:130] > 51391683
I1212 00:36:45.089194 104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/87609.pem /etc/ssl/certs/51391683.0"
I1212 00:36:45.099154 104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/876092.pem && ln -fs /usr/share/ca-certificates/876092.pem /etc/ssl/certs/876092.pem"
I1212 00:36:45.108823 104530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/876092.pem
I1212 00:36:45.113316 104530 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:16 /usr/share/ca-certificates/876092.pem
I1212 00:36:45.113568 104530 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:16 /usr/share/ca-certificates/876092.pem
I1212 00:36:45.113613 104530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/876092.pem
I1212 00:36:45.118966 104530 command_runner.go:130] > 3ec20f2e
I1212 00:36:45.119043 104530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/876092.pem /etc/ssl/certs/3ec20f2e.0"
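Each CA installed above becomes visible to OpenSSL's default verification lookup by being symlinked under its subject hash in /etc/ssl/certs, which is what the openssl x509 -hash calls compute. For one certificate the sequence is:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$cert" /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$cert")   # prints b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"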
I1212 00:36:45.128635 104530 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I1212 00:36:45.132978 104530 command_runner.go:130] > ca.crt
I1212 00:36:45.132994 104530 command_runner.go:130] > ca.key
I1212 00:36:45.133000 104530 command_runner.go:130] > healthcheck-client.crt
I1212 00:36:45.133004 104530 command_runner.go:130] > healthcheck-client.key
I1212 00:36:45.133008 104530 command_runner.go:130] > peer.crt
I1212 00:36:45.133014 104530 command_runner.go:130] > peer.key
I1212 00:36:45.133018 104530 command_runner.go:130] > server.crt
I1212 00:36:45.133022 104530 command_runner.go:130] > server.key
I1212 00:36:45.133062 104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1212 00:36:45.138700 104530 command_runner.go:130] > Certificate will not expire
I1212 00:36:45.138753 104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1212 00:36:45.143928 104530 command_runner.go:130] > Certificate will not expire
I1212 00:36:45.143989 104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1212 00:36:45.149974 104530 command_runner.go:130] > Certificate will not expire
I1212 00:36:45.150040 104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1212 00:36:45.155645 104530 command_runner.go:130] > Certificate will not expire
I1212 00:36:45.155702 104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1212 00:36:45.161120 104530 command_runner.go:130] > Certificate will not expire
I1212 00:36:45.161172 104530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1212 00:36:45.166435 104530 command_runner.go:130] > Certificate will not expire
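The -checkend 86400 runs above ask whether each certificate will still be valid 86400 seconds (24 hours) from now, so nothing needs regenerating before the cluster comes back up. For example:

    # Exits 0 and prints "Certificate will not expire" while the cert is good;
    # exits 1 with "Certificate will expire" if it lapses within 24h.
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt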
I1212 00:36:45.166596 104530 kubeadm.go:404] StartCluster: {Name:multinode-859606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-859606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1212 00:36:45.166771 104530 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1212 00:36:45.186362 104530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1212 00:36:45.195450 104530 command_runner.go:130] > /var/lib/kubelet/config.yaml
I1212 00:36:45.195478 104530 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
I1212 00:36:45.195486 104530 command_runner.go:130] > /var/lib/minikube/etcd:
I1212 00:36:45.195492 104530 command_runner.go:130] > member
I1212 00:36:45.195591 104530 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I1212 00:36:45.195612 104530 kubeadm.go:636] restartCluster start
I1212 00:36:45.195674 104530 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1212 00:36:45.205557 104530 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1212 00:36:45.205994 104530 kubeconfig.go:135] verify returned: extract IP: "multinode-859606" does not appear in /home/jenkins/minikube-integration/17764-80294/kubeconfig
I1212 00:36:45.206105 104530 kubeconfig.go:146] "multinode-859606" context is missing from /home/jenkins/minikube-integration/17764-80294/kubeconfig - will repair!
I1212 00:36:45.206407 104530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-80294/kubeconfig: {Name:mkf7cdfdedbee22114abcb4b16af22e84438f3f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:36:45.206781 104530 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17764-80294/kubeconfig
I1212 00:36:45.207021 104530 kapi.go:59] client config for multinode-859606: &rest.Config{Host:"https://192.168.39.40:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.key", CAFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1212 00:36:45.207626 104530 cert_rotation.go:137] Starting client certificate rotation controller
I1212 00:36:45.207759 104530 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1212 00:36:45.216109 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:45.216158 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:45.227128 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:45.227145 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:45.227181 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:45.237721 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:45.738433 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:45.738513 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:45.749916 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:46.238556 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:46.238626 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:46.249796 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:46.738436 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:46.738510 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:46.750275 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:47.238820 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:47.238918 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:47.250330 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:47.737880 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:47.737967 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:47.749173 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:48.238871 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:48.238981 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:48.250477 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:48.737907 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:48.737986 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:48.749969 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:49.238635 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:49.238729 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:49.250296 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:49.738397 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:49.738483 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:49.750014 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:50.238638 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:50.238725 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:50.250537 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:50.738104 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:50.738212 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:50.749728 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:51.238279 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:51.238383 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:51.249977 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:51.738590 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:51.738674 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:51.750353 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:52.237967 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:52.238033 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:52.249749 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:52.738311 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:52.738400 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:52.749734 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:53.238473 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:53.238570 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:53.249803 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:53.738439 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:53.738545 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:53.749846 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:54.238458 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:54.238551 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:54.250276 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:54.738396 104530 api_server.go:166] Checking apiserver status ...
I1212 00:36:54.738477 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1212 00:36:54.749594 104530 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1212 00:36:55.216372 104530 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
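The ten seconds of polling above (00:36:45 through 00:36:55) is the restart path probing for a live apiserver before concluding that the control plane must be reconfigured. Each probe is a single process lookup:

    # Match a kube-apiserver whose full command line mentions minikube;
    # exit status 1, as in every attempt above, means none is running yet.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'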
I1212 00:36:55.216413 104530 kubeadm.go:1135] stopping kube-system containers ...
I1212 00:36:55.216471 104530 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1212 00:36:55.242800 104530 command_runner.go:130] > abde5ad85d4a
I1212 00:36:55.242825 104530 command_runner.go:130] > 6960e84b00b8
I1212 00:36:55.242831 104530 command_runner.go:130] > 55413175770e
I1212 00:36:55.242840 104530 command_runner.go:130] > 56fd6254d6e1
I1212 00:36:55.242847 104530 command_runner.go:130] > b63a75f45416
I1212 00:36:55.242852 104530 command_runner.go:130] > 19421dc21753
I1212 00:36:55.242858 104530 command_runner.go:130] > ecfcbd586321
I1212 00:36:55.242864 104530 command_runner.go:130] > 9767a413586e
I1212 00:36:55.242869 104530 command_runner.go:130] > 4ba778c674f0
I1212 00:36:55.242874 104530 command_runner.go:130] > 19f9d76e8f1c
I1212 00:36:55.242880 104530 command_runner.go:130] > fc27b8583502
I1212 00:36:55.242885 104530 command_runner.go:130] > a49117d4a4c8
I1212 00:36:55.242891 104530 command_runner.go:130] > 5aa25d818283
I1212 00:36:55.242897 104530 command_runner.go:130] > ed0cff49857f
I1212 00:36:55.242904 104530 command_runner.go:130] > 510b18b7b6d6
I1212 00:36:55.242914 104530 command_runner.go:130] > 34ac7e63ee51
I1212 00:36:55.242922 104530 command_runner.go:130] > dc5d8378ca26
I1212 00:36:55.242929 104530 command_runner.go:130] > 335bd2869121
I1212 00:36:55.242939 104530 command_runner.go:130] > 10ca85c531dc
I1212 00:36:55.242951 104530 command_runner.go:130] > dcead5249b2f
I1212 00:36:55.242961 104530 command_runner.go:130] > c3360b039380
I1212 00:36:55.242971 104530 command_runner.go:130] > 08edfeaa5cab
I1212 00:36:55.242979 104530 command_runner.go:130] > 5c674269e2eb
I1212 00:36:55.242986 104530 command_runner.go:130] > e80fc43dacae
I1212 00:36:55.242994 104530 command_runner.go:130] > 547ce8660107
I1212 00:36:55.243001 104530 command_runner.go:130] > 6fce6e649e1a
I1212 00:36:55.243008 104530 command_runner.go:130] > 7db8deb95763
I1212 00:36:55.243015 104530 command_runner.go:130] > fef547bfcef9
I1212 00:36:55.243026 104530 command_runner.go:130] > afcf416fd476
I1212 00:36:55.243035 104530 command_runner.go:130] > d42aca9dd643
I1212 00:36:55.243041 104530 command_runner.go:130] > 757215f5e48f
I1212 00:36:55.243048 104530 command_runner.go:130] > f785241ab5c9
I1212 00:36:55.243103 104530 docker.go:469] Stopping containers: [abde5ad85d4a 6960e84b00b8 55413175770e 56fd6254d6e1 b63a75f45416 19421dc21753 ecfcbd586321 9767a413586e 4ba778c674f0 19f9d76e8f1c fc27b8583502 a49117d4a4c8 5aa25d818283 ed0cff49857f 510b18b7b6d6 34ac7e63ee51 dc5d8378ca26 335bd2869121 10ca85c531dc dcead5249b2f c3360b039380 08edfeaa5cab 5c674269e2eb e80fc43dacae 547ce8660107 6fce6e649e1a 7db8deb95763 fef547bfcef9 afcf416fd476 d42aca9dd643 757215f5e48f f785241ab5c9]
I1212 00:36:55.243180 104530 ssh_runner.go:195] Run: docker stop abde5ad85d4a 6960e84b00b8 55413175770e 56fd6254d6e1 b63a75f45416 19421dc21753 ecfcbd586321 9767a413586e 4ba778c674f0 19f9d76e8f1c fc27b8583502 a49117d4a4c8 5aa25d818283 ed0cff49857f 510b18b7b6d6 34ac7e63ee51 dc5d8378ca26 335bd2869121 10ca85c531dc dcead5249b2f c3360b039380 08edfeaa5cab 5c674269e2eb e80fc43dacae 547ce8660107 6fce6e649e1a 7db8deb95763 fef547bfcef9 afcf416fd476 d42aca9dd643 757215f5e48f f785241ab5c9
I1212 00:36:55.267560 104530 command_runner.go:130] > abde5ad85d4a
I1212 00:36:55.267589 104530 command_runner.go:130] > 6960e84b00b8
I1212 00:36:55.267595 104530 command_runner.go:130] > 55413175770e
I1212 00:36:55.267601 104530 command_runner.go:130] > 56fd6254d6e1
I1212 00:36:55.267608 104530 command_runner.go:130] > b63a75f45416
I1212 00:36:55.267613 104530 command_runner.go:130] > 19421dc21753
I1212 00:36:55.267630 104530 command_runner.go:130] > ecfcbd586321
I1212 00:36:55.267637 104530 command_runner.go:130] > 9767a413586e
I1212 00:36:55.267643 104530 command_runner.go:130] > 4ba778c674f0
I1212 00:36:55.267650 104530 command_runner.go:130] > 19f9d76e8f1c
I1212 00:36:55.267656 104530 command_runner.go:130] > fc27b8583502
I1212 00:36:55.267666 104530 command_runner.go:130] > a49117d4a4c8
I1212 00:36:55.267672 104530 command_runner.go:130] > 5aa25d818283
I1212 00:36:55.267679 104530 command_runner.go:130] > ed0cff49857f
I1212 00:36:55.267707 104530 command_runner.go:130] > 510b18b7b6d6
I1212 00:36:55.267723 104530 command_runner.go:130] > 34ac7e63ee51
I1212 00:36:55.267729 104530 command_runner.go:130] > dc5d8378ca26
I1212 00:36:55.267735 104530 command_runner.go:130] > 335bd2869121
I1212 00:36:55.267742 104530 command_runner.go:130] > 10ca85c531dc
I1212 00:36:55.267757 104530 command_runner.go:130] > dcead5249b2f
I1212 00:36:55.267764 104530 command_runner.go:130] > c3360b039380
I1212 00:36:55.267770 104530 command_runner.go:130] > 08edfeaa5cab
I1212 00:36:55.267779 104530 command_runner.go:130] > 5c674269e2eb
I1212 00:36:55.267785 104530 command_runner.go:130] > e80fc43dacae
I1212 00:36:55.267798 104530 command_runner.go:130] > 547ce8660107
I1212 00:36:55.267807 104530 command_runner.go:130] > 6fce6e649e1a
I1212 00:36:55.267816 104530 command_runner.go:130] > 7db8deb95763
I1212 00:36:55.267825 104530 command_runner.go:130] > fef547bfcef9
I1212 00:36:55.267834 104530 command_runner.go:130] > afcf416fd476
I1212 00:36:55.267843 104530 command_runner.go:130] > d42aca9dd643
I1212 00:36:55.267852 104530 command_runner.go:130] > 757215f5e48f
I1212 00:36:55.267861 104530 command_runner.go:130] > f785241ab5c9
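Before the kubelet is restarted, every kube-system container is stopped. The kubelet names Docker containers k8s_<container>_<pod>_<namespace>_..., so the whole namespace can be selected with a regex name filter, which is what the two docker commands above do:

    # List all kube-system containers (running or exited) and stop them.
    ids=$(docker ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}}')
    docker stop $ids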
I1212 00:36:55.268959 104530 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1212 00:36:55.283176 104530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1212 00:36:55.291931 104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I1212 00:36:55.291964 104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I1212 00:36:55.291973 104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I1212 00:36:55.291980 104530 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1212 00:36:55.292025 104530 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1212 00:36:55.292077 104530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1212 00:36:55.300972 104530 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1212 00:36:55.300994 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1212 00:36:55.409847 104530 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1212 00:36:55.410210 104530 command_runner.go:130] > [certs] Using existing ca certificate authority
I1212 00:36:55.410700 104530 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I1212 00:36:55.411130 104530 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1212 00:36:55.411654 104530 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
I1212 00:36:55.412107 104530 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
I1212 00:36:55.413059 104530 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
I1212 00:36:55.413464 104530 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
I1212 00:36:55.413846 104530 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
I1212 00:36:55.414303 104530 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1212 00:36:55.414667 104530 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
I1212 00:36:55.416560 104530 command_runner.go:130] > [certs] Using the existing "sa" key
I1212 00:36:55.416642 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1212 00:36:56.211128 104530 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1212 00:36:56.211154 104530 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I1212 00:36:56.211167 104530 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1212 00:36:56.211176 104530 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1212 00:36:56.211190 104530 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1212 00:36:56.211225 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1212 00:36:56.277692 104530 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1212 00:36:56.278847 104530 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1212 00:36:56.278889 104530 command_runner.go:130] > [kubelet-start] Starting the kubelet
I1212 00:36:56.393138 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1212 00:36:56.490674 104530 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1212 00:36:56.490707 104530 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I1212 00:36:56.495141 104530 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1212 00:36:56.496969 104530 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I1212 00:36:56.505734 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I1212 00:36:56.568063 104530 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
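Rather than running a full kubeadm init, the restart path replays individual init phases against the generated config, as the five Run lines above show. By hand, the same sequence is:

    bin=/var/lib/minikube/binaries/v1.28.4
    cfg=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$bin:$PATH" kubeadm init phase certs all --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase kubeconfig all --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase kubelet-start --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase control-plane all --config "$cfg"
    sudo env PATH="$bin:$PATH" kubeadm init phase etcd local --config "$cfg"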
I1212 00:36:56.574809 104530 api_server.go:52] waiting for apiserver process to appear ...
I1212 00:36:56.574879 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:56.587806 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:57.100023 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:57.600145 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:58.099727 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:58.599716 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:59.099714 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:36:59.599934 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:37:00.099594 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:37:00.117319 104530 command_runner.go:130] > 1800
I1212 00:37:00.117686 104530 api_server.go:72] duration metric: took 3.542880083s to wait for apiserver process to appear ...
I1212 00:37:00.117709 104530 api_server.go:88] waiting for apiserver healthz status ...
I1212 00:37:00.117727 104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
I1212 00:37:02.771626 104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1212 00:37:02.771661 104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1212 00:37:02.771677 104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
I1212 00:37:02.838010 104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1212 00:37:02.838048 104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1212 00:37:03.338843 104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
I1212 00:37:03.344825 104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W1212 00:37:03.344863 104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I1212 00:37:03.838231 104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
I1212 00:37:03.845511 104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W1212 00:37:03.845548 104530 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I1212 00:37:04.339177 104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
I1212 00:37:04.344349 104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 200:
ok
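The healthz progression above is the usual restart sequence: 403 while the anonymous probe cannot yet be authorized, 500 while poststarthooks such as rbac/bootstrap-roles are still marked [-], then a bare 200 "ok" once every hook has completed. The same probe by hand (-k skips TLS verification for a quick anonymous check):

    curl -ks https://192.168.39.40:8443/healthz
    # while degraded, the body lists each check as [+] ok or [-] failed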
I1212 00:37:04.344445 104530 round_trippers.go:463] GET https://192.168.39.40:8443/version
I1212 00:37:04.344456 104530 round_trippers.go:469] Request Headers:
I1212 00:37:04.344469 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:04.344482 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:04.352515 104530 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I1212 00:37:04.352546 104530 round_trippers.go:577] Response Headers:
I1212 00:37:04.352557 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:04.352567 104530 round_trippers.go:580] Content-Length: 264
I1212 00:37:04.352575 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:04 GMT
I1212 00:37:04.352584 104530 round_trippers.go:580] Audit-Id: 63ee9643-66fd-4e1a-a212-0e71234e47a2
I1212 00:37:04.352591 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:04.352598 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:04.352608 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:04.352649 104530 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.4",
"gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
"gitTreeState": "clean",
"buildDate": "2023-11-15T16:48:54Z",
"goVersion": "go1.20.11",
"compiler": "gc",
"platform": "linux/amd64"
}
I1212 00:37:04.352786 104530 api_server.go:141] control plane version: v1.28.4
I1212 00:37:04.352817 104530 api_server.go:131] duration metric: took 4.235100574s to wait for apiserver health ...
I1212 00:37:04.352829 104530 cni.go:84] Creating CNI manager for ""
I1212 00:37:04.352840 104530 cni.go:136] 2 nodes found, recommending kindnet
I1212 00:37:04.355105 104530 out.go:177] * Configuring CNI (Container Networking Interface) ...
I1212 00:37:04.356881 104530 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1212 00:37:04.363840 104530 command_runner.go:130] > File: /opt/cni/bin/portmap
I1212 00:37:04.363876 104530 command_runner.go:130] > Size: 2615256 Blocks: 5112 IO Block: 4096 regular file
I1212 00:37:04.363888 104530 command_runner.go:130] > Device: 11h/17d Inode: 3544 Links: 1
I1212 00:37:04.363897 104530 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I1212 00:37:04.363932 104530 command_runner.go:130] > Access: 2023-12-12 00:36:32.475977075 +0000
I1212 00:37:04.363942 104530 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
I1212 00:37:04.363949 104530 command_runner.go:130] > Change: 2023-12-12 00:36:30.674977075 +0000
I1212 00:37:04.363955 104530 command_runner.go:130] > Birth: -
I1212 00:37:04.364014 104530 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
I1212 00:37:04.364031 104530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I1212 00:37:04.384536 104530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1212 00:37:05.836837 104530 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I1212 00:37:05.848426 104530 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I1212 00:37:05.852488 104530 command_runner.go:130] > serviceaccount/kindnet unchanged
I1212 00:37:05.879402 104530 command_runner.go:130] > daemonset.apps/kindnet configured
I1212 00:37:05.888362 104530 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.503791012s)
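With two nodes in the profile, minikube selects kindnet as the CNI and applies its manifest with the cluster's own kubectl against the node-local kubeconfig; the "unchanged"/"configured" results mean the objects survived from the previous run. The same apply by hand:

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig \
        apply -f /var/tmp/minikube/cni.yaml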
I1212 00:37:05.888392 104530 system_pods.go:43] waiting for kube-system pods to appear ...
I1212 00:37:05.888502 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:05.888513 104530 round_trippers.go:469] Request Headers:
I1212 00:37:05.888524 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:05.888534 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:05.893619 104530 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I1212 00:37:05.893657 104530 round_trippers.go:577] Response Headers:
I1212 00:37:05.893666 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:05.893674 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:05.893682 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:05.893690 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:05.893699 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:05 GMT
I1212 00:37:05.893708 104530 round_trippers.go:580] Audit-Id: 0f783734-4de0-49f4-945d-a630ecccf305
I1212 00:37:05.895980 104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1199"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
I1212 00:37:05.900061 104530 system_pods.go:59] 12 kube-system pods found
I1212 00:37:05.900092 104530 system_pods.go:61] "coredns-5dd5756b68-t9jz8" [3605a003-e8d6-46b2-8fe7-f45647656622] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1212 00:37:05.900101 104530 system_pods.go:61] "etcd-multinode-859606" [7d6ae370-b910-4aef-8729-e141b307ae17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1212 00:37:05.900106 104530 system_pods.go:61] "kindnet-9slwc" [6b37daf7-e9d5-47c5-ae94-01150282b6cf] Running
I1212 00:37:05.900109 104530 system_pods.go:61] "kindnet-d4q52" [35ed1c56-7487-4b6d-ab1f-b5cfe6502739] Running
I1212 00:37:05.900116 104530 system_pods.go:61] "kindnet-x2g5d" [c1dab004-2557-4b4f-975b-bd0b5a8f4d90] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I1212 00:37:05.900123 104530 system_pods.go:61] "kube-apiserver-multinode-859606" [0060efa7-dc06-439e-878f-b93b0e016326] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1212 00:37:05.900135 104530 system_pods.go:61] "kube-controller-manager-multinode-859606" [901bf3ab-f34d-42c8-b1da-d5431ae0219f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1212 00:37:05.900155 104530 system_pods.go:61] "kube-proxy-6f6zz" [d5931621-47fd-4f1a-bf46-813dd8352f00] Running
I1212 00:37:05.900164 104530 system_pods.go:61] "kube-proxy-prf7f" [8238226c-3d01-4b91-963b-7360206b8615] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1212 00:37:05.900171 104530 system_pods.go:61] "kube-proxy-q9h26" [7dd12033-bf81-4cd3-a412-3fe3211dc87b] Running
I1212 00:37:05.900176 104530 system_pods.go:61] "kube-scheduler-multinode-859606" [19a4264c-6ba5-44f4-8419-6f04d6224c92] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1212 00:37:05.900188 104530 system_pods.go:61] "storage-provisioner" [a021db21-b335-4c05-8e32-808642dbb72e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1212 00:37:05.900194 104530 system_pods.go:74] duration metric: took 11.796772ms to wait for pod list to return data ...
I1212 00:37:05.900203 104530 node_conditions.go:102] verifying NodePressure condition ...
I1212 00:37:05.900268 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes
I1212 00:37:05.900277 104530 round_trippers.go:469] Request Headers:
I1212 00:37:05.900284 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:05.900293 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:05.902944 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:05.902977 104530 round_trippers.go:577] Response Headers:
I1212 00:37:05.902987 104530 round_trippers.go:580] Audit-Id: 81b09a2b-85f5-497e-b79a-4f9569b9a2e7
I1212 00:37:05.903000 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:05.903011 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:05.903018 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:05.903031 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:05.903044 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:05 GMT
I1212 00:37:05.903213 104530 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1199"},"items":[{"metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10135 chars]
I1212 00:37:05.903891 104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1212 00:37:05.903937 104530 node_conditions.go:123] node cpu capacity is 2
I1212 00:37:05.903961 104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1212 00:37:05.903967 104530 node_conditions.go:123] node cpu capacity is 2
I1212 00:37:05.903974 104530 node_conditions.go:105] duration metric: took 3.766372ms to run NodePressure ...
I1212 00:37:05.903993 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1212 00:37:06.226936 104530 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I1212 00:37:06.226983 104530 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
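The addon phase is the final kubeadm step of the restart, reapplying the essential CoreDNS and kube-proxy manifests before minikube starts waiting on individual pods:

    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
        kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml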
I1212 00:37:06.227046 104530 kubeadm.go:772] waiting for restarted kubelet to initialise ...
I1212 00:37:06.227181 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
I1212 00:37:06.227195 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.227207 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.227216 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.231116 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:06.231139 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.231148 104530 round_trippers.go:580] Audit-Id: 69442a0f-0400-4b49-b627-328626316be1
I1212 00:37:06.231157 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.231166 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.231175 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.231194 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.231203 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.231655 104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1202"},"items":[{"metadata":{"name":"etcd-multinode-859606","namespace":"kube-system","uid":"7d6ae370-b910-4aef-8729-e141b307ae17","resourceVersion":"1175","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.40:2379","kubernetes.io/config.hash":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.mirror":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.seen":"2023-12-12T00:30:03.645880014Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29766 chars]
I1212 00:37:06.233034 104530 kubeadm.go:787] kubelet initialised
I1212 00:37:06.233057 104530 kubeadm.go:788] duration metric: took 5.989168ms waiting for restarted kubelet to initialise ...
I1212 00:37:06.233070 104530 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1212 00:37:06.233145 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:06.233158 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.233168 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.233176 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.237466 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:06.237487 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.237497 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.237506 104530 round_trippers.go:580] Audit-Id: 39c8852d-e60c-4370-870d-ec951e0b6883
I1212 00:37:06.237515 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.237528 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.237540 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.237548 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.238857 104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1202"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
I1212 00:37:06.242660 104530 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
I1212 00:37:06.242743 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:06.242753 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.242767 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.242780 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.245902 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:06.245916 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.245922 104530 round_trippers.go:580] Audit-Id: 992a9c9e-aaec-49ae-b76c-09a84a7382e6
I1212 00:37:06.245937 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.245952 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.245967 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.245974 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.245983 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.246223 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:06.246613 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:06.246627 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.246633 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.246640 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.248752 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:06.248771 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.248780 104530 round_trippers.go:580] Audit-Id: e035e5e3-4a98-439c-b13b-fca81955f3e3
I1212 00:37:06.248788 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.248796 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.248805 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.248820 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.248828 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.249002 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:06.249315 104530 pod_ready.go:97] node "multinode-859606" hosting pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.249335 104530 pod_ready.go:81] duration metric: took 6.646085ms waiting for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
E1212 00:37:06.249343 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
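
This skip pattern repeats for every control-plane pod below: the pod object is fetched, then the node named in its spec, and the wait is short-circuited whenever the hosting node's Ready condition is False. A hedged sketch of that gate (package and function names are invented for illustration):

package podgate

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node's NodeReady condition is True.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

// podGate fetches the pod, then the node it is scheduled on, mirroring the
// GET /pods/{name} + GET /nodes/{name} pairs in the log: a pod on a NotReady
// node is skipped instead of being waited on further.
func podGate(ctx context.Context, cs kubernetes.Interface, ns, pod string) (bool, error) {
	p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return nodeIsReady(ctx, cs, p.Spec.NodeName)
}
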
I1212 00:37:06.249367 104530 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:06.249423 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-859606
I1212 00:37:06.249431 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.249441 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.249459 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.251411 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:06.251431 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.251445 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.251453 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.251462 104530 round_trippers.go:580] Audit-Id: 78646abe-5066-4ba6-8d95-ec6fa44a1ab7
I1212 00:37:06.251469 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.251476 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.251486 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.251707 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-859606","namespace":"kube-system","uid":"7d6ae370-b910-4aef-8729-e141b307ae17","resourceVersion":"1175","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.40:2379","kubernetes.io/config.hash":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.mirror":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.seen":"2023-12-12T00:30:03.645880014Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6296 chars]
I1212 00:37:06.252098 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:06.252112 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.252121 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.252127 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.254083 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:06.254103 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.254111 104530 round_trippers.go:580] Audit-Id: 55b0d2ca-975d-4309-84a7-7cb9b1d8e361
I1212 00:37:06.254120 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.254128 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.254136 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.254144 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.254152 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.254323 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:06.254602 104530 pod_ready.go:97] node "multinode-859606" hosting pod "etcd-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.254619 104530 pod_ready.go:81] duration metric: took 5.239063ms waiting for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
E1212 00:37:06.254626 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "etcd-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.254639 104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:06.254698 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-859606
I1212 00:37:06.254708 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.254715 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.254727 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.256930 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:06.256949 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.256958 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.256967 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.256974 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.256983 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.256991 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.257005 104530 round_trippers.go:580] Audit-Id: aa63f562-c9c3-453f-92e9-d6a4c4b3232f
I1212 00:37:06.257170 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-859606","namespace":"kube-system","uid":"0060efa7-dc06-439e-878f-b93b0e016326","resourceVersion":"1177","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.40:8443","kubernetes.io/config.hash":"6579d881f0553848179768317ac84853","kubernetes.io/config.mirror":"6579d881f0553848179768317ac84853","kubernetes.io/config.seen":"2023-12-12T00:29:55.207817853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7851 chars]
I1212 00:37:06.257538 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:06.257552 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.257558 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.257564 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.259425 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:06.259445 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.259455 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.259463 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.259471 104530 round_trippers.go:580] Audit-Id: 6b47a0d5-4136-488c-882b-b7fdd50344ce
I1212 00:37:06.259479 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.259487 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.259495 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.259782 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:06.260081 104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-apiserver-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.260097 104530 pod_ready.go:81] duration metric: took 5.449955ms waiting for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
E1212 00:37:06.260103 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-apiserver-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.260113 104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:06.260178 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:06.260188 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.260196 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.260209 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.262963 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:06.262979 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.262988 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.262996 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.263012 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.263024 104530 round_trippers.go:580] Audit-Id: eb54b9e3-39c5-4e0b-975b-d574f9443f33
I1212 00:37:06.263034 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.263051 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.263697 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:06.289336 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:06.289371 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.289380 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.289385 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.292233 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:06.292251 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.292257 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.292263 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.292268 104530 round_trippers.go:580] Audit-Id: 436076e3-8b39-45e2-80a6-f8f174ee0ea6
I1212 00:37:06.292273 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.292280 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.292288 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.292641 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:06.293036 104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-controller-manager-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.293058 104530 pod_ready.go:81] duration metric: took 32.933264ms waiting for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
E1212 00:37:06.293071 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-controller-manager-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:06.293082 104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
I1212 00:37:06.489501 104530 request.go:629] Waited for 196.342403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6f6zz
I1212 00:37:06.489581 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6f6zz
I1212 00:37:06.489586 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.489598 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.489608 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.493034 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:06.493071 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.493081 104530 round_trippers.go:580] Audit-Id: 0957bc6a-2f51-41b9-a929-11d0c801edd6
I1212 00:37:06.493089 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.493098 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.493113 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.493126 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.493134 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.493829 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6f6zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d5931621-47fd-4f1a-bf46-813dd8352f00","resourceVersion":"1087","creationTimestamp":"2023-12-12T00:32:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:32:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
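
The "Waited ... due to client-side throttling" lines here and below come from client-go's own rate limiter, which defaults to 5 QPS with a burst of 10 and delays requests before they are ever sent; they are unrelated to server-side API Priority and Fairness (the X-Kubernetes-Pf-* headers above). A sketch of raising those limits on a rest.Config, if the throttling were unwanted; the values are illustrative:

package client

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newFastClient raises client-go's default client-side rate limit (5 QPS,
// burst 10), the limiter that produces the "Waited ... due to client-side
// throttling" log lines.
func newFastClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
	cfg.QPS = 50    // sustained requests per second
	cfg.Burst = 100 // short-term burst above QPS
	return kubernetes.NewForConfig(cfg)
}
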
I1212 00:37:06.688623 104530 request.go:629] Waited for 194.307311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m03
I1212 00:37:06.688686 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m03
I1212 00:37:06.688690 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.688698 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.688704 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.691344 104530 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1212 00:37:06.691361 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.691368 104530 round_trippers.go:580] Audit-Id: 5d88fdfd-6f2f-44b1-a736-b6120a7e5a78
I1212 00:37:06.691373 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.691390 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.691397 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.691405 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.691413 104530 round_trippers.go:580] Content-Length: 210
I1212 00:37:06.691425 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.691448 104530 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-859606-m03\" not found","reason":"NotFound","details":{"name":"multinode-859606-m03","kind":"nodes"},"code":404}
I1212 00:37:06.691655 104530 pod_ready.go:97] node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
I1212 00:37:06.691677 104530 pod_ready.go:81] duration metric: took 398.587524ms waiting for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
E1212 00:37:06.691686 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
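
kube-proxy-6f6zz is a DaemonSet pod whose node, multinode-859606-m03, no longer exists after the restart, so the node lookup returns 404 and the wait treats the pod as terminally unschedulable rather than retrying. A sketch of that terminal-versus-transient distinction using apimachinery's error helpers (package and function names illustrative):

package nodecheck

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hostingNodeExists separates "node is gone" (terminal for the pod wait, as
// with multinode-859606-m03 above) from transient API errors worth retrying.
func hostingNodeExists(ctx context.Context, cs kubernetes.Interface, node string) (bool, error) {
	_, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil // 404: stop waiting on pods scheduled to this node
	}
	if err != nil {
		return false, fmt.Errorf("getting node %s: %w", node, err)
	}
	return true, nil
}
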
I1212 00:37:06.691693 104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
I1212 00:37:06.889174 104530 request.go:629] Waited for 197.369164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-prf7f
I1212 00:37:06.889252 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-prf7f
I1212 00:37:06.889259 104530 round_trippers.go:469] Request Headers:
I1212 00:37:06.889271 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:06.889280 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:06.893029 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:06.893047 104530 round_trippers.go:577] Response Headers:
I1212 00:37:06.893054 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:06 GMT
I1212 00:37:06.893093 104530 round_trippers.go:580] Audit-Id: 6846aa1b-42ae-4d5d-a1c7-384d5728840b
I1212 00:37:06.893108 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:06.893115 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:06.893120 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:06.893128 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:06.893282 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-prf7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"8238226c-3d01-4b91-963b-7360206b8615","resourceVersion":"1182","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5929 chars]
I1212 00:37:07.089197 104530 request.go:629] Waited for 195.360283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:07.089292 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:07.089298 104530 round_trippers.go:469] Request Headers:
I1212 00:37:07.089316 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:07.089322 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:07.091891 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:07.091927 104530 round_trippers.go:577] Response Headers:
I1212 00:37:07.091939 104530 round_trippers.go:580] Audit-Id: 1d65f568-2c4a-42d4-bbba-8be4bdc48dd6
I1212 00:37:07.091948 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:07.091961 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:07.091970 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:07.091979 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:07.091990 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:07 GMT
I1212 00:37:07.092224 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:07.092619 104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-proxy-prf7f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:07.092640 104530 pod_ready.go:81] duration metric: took 400.940457ms waiting for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
E1212 00:37:07.092649 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-proxy-prf7f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:07.092655 104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
I1212 00:37:07.289085 104530 request.go:629] Waited for 196.361677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
I1212 00:37:07.289150 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
I1212 00:37:07.289155 104530 round_trippers.go:469] Request Headers:
I1212 00:37:07.289165 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:07.289173 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:07.292103 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:07.292128 104530 round_trippers.go:577] Response Headers:
I1212 00:37:07.292139 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:07.292147 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:07 GMT
I1212 00:37:07.292160 104530 round_trippers.go:580] Audit-Id: 4abc3eb7-8c82-4d87-b6ea-4f96f5e08936
I1212 00:37:07.292172 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:07.292182 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:07.292187 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:07.292410 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q9h26","generateName":"kube-proxy-","namespace":"kube-system","uid":"7dd12033-bf81-4cd3-a412-3fe3211dc87b","resourceVersion":"978","creationTimestamp":"2023-12-12T00:31:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:31:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
I1212 00:37:07.489267 104530 request.go:629] Waited for 196.338554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
I1212 00:37:07.489349 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
I1212 00:37:07.489362 104530 round_trippers.go:469] Request Headers:
I1212 00:37:07.489373 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:07.489380 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:07.491859 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:07.491887 104530 round_trippers.go:577] Response Headers:
I1212 00:37:07.491897 104530 round_trippers.go:580] Audit-Id: a3f5d27d-a101-460d-9f23-04a20e185c6f
I1212 00:37:07.491907 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:07.491930 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:07.491943 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:07.491952 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:07.491959 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:07 GMT
I1212 00:37:07.492124 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606-m02","uid":"4dead465-c032-4274-8147-a5a7d38c1bf5","resourceVersion":"1083","creationTimestamp":"2023-12-12T00:34:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_35_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:34:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3805 chars]
I1212 00:37:07.492453 104530 pod_ready.go:92] pod "kube-proxy-q9h26" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:07.492469 104530 pod_ready.go:81] duration metric: took 399.80822ms waiting for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
I1212 00:37:07.492483 104530 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:07.688932 104530 request.go:629] Waited for 196.377404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
I1212 00:37:07.689024 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
I1212 00:37:07.689047 104530 round_trippers.go:469] Request Headers:
I1212 00:37:07.689062 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:07.689086 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:07.692055 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:07.692076 104530 round_trippers.go:577] Response Headers:
I1212 00:37:07.692083 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:07.692088 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:07.692094 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:07.692101 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:07.692109 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:07 GMT
I1212 00:37:07.692118 104530 round_trippers.go:580] Audit-Id: 8c31c43b-819b-4283-9d9f-35f04a7e36e9
I1212 00:37:07.692273 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-859606","namespace":"kube-system","uid":"19a4264c-6ba5-44f4-8419-6f04d6224c92","resourceVersion":"1173","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.mirror":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.seen":"2023-12-12T00:29:55.207819594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
I1212 00:37:07.889054 104530 request.go:629] Waited for 196.353748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:07.889117 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:07.889125 104530 round_trippers.go:469] Request Headers:
I1212 00:37:07.889137 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:07.889151 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:07.892167 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:07.892188 104530 round_trippers.go:577] Response Headers:
I1212 00:37:07.892194 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:07.892200 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:07.892226 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:07.892241 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:07.892250 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:07 GMT
I1212 00:37:07.892257 104530 round_trippers.go:580] Audit-Id: 9ee0618c-b043-4e2b-9e76-9d15b5ac7dc7
I1212 00:37:07.892403 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:07.892746 104530 pod_ready.go:97] node "multinode-859606" hosting pod "kube-scheduler-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:07.892773 104530 pod_ready.go:81] duration metric: took 400.280036ms waiting for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
E1212 00:37:07.892785 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606" hosting pod "kube-scheduler-multinode-859606" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-859606" has status "Ready":"False"
I1212 00:37:07.892824 104530 pod_ready.go:38] duration metric: took 1.659742815s for extra waiting for all system-critical pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1212 00:37:07.892857 104530 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1212 00:37:07.904430 104530 command_runner.go:130] > -16
I1212 00:37:07.904886 104530 ops.go:34] apiserver oom_adj: -16
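
The -16 read back here confirms kube-apiserver runs with a negative OOM adjustment, making the kernel's OOM killer less likely to select it under memory pressure. A native sketch of the same check, replacing the cat-over-SSH with a direct read (the PID lookup that pgrep performs is assumed done elsewhere):

package oom

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readOOMAdj reads /proc/<pid>/oom_adj, the (legacy) OOM-killer weighting
// checked above for kube-apiserver; negative values make the kernel less
// likely to kill the process.
func readOOMAdj(pid int) (int, error) {
	raw, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(raw)))
}
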
I1212 00:37:07.904899 104530 kubeadm.go:640] restartCluster took 22.709280238s
I1212 00:37:07.904906 104530 kubeadm.go:406] StartCluster complete in 22.738318179s
I1212 00:37:07.904921 104530 settings.go:142] acquiring lock: {Name:mk78e6f78084358f8434def169cefe6a62407a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:37:07.904985 104530 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17764-80294/kubeconfig
I1212 00:37:07.905654 104530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17764-80294/kubeconfig: {Name:mkf7cdfdedbee22114abcb4b16af22e84438f3f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:37:07.905860 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1212 00:37:07.906001 104530 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
I1212 00:37:07.906240 104530 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17764-80294/kubeconfig
I1212 00:37:07.906246 104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:37:07.909257 104530 out.go:177] * Enabled addons:
I1212 00:37:07.910860 104530 addons.go:502] enable addons completed in 4.865147ms: enabled=[]
I1212 00:37:07.911128 104530 kapi.go:59] client config for multinode-859606: &rest.Config{Host:"https://192.168.39.40:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.crt", KeyFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/client.key", CAFile:"/home/jenkins/minikube-integration/17764-80294/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1212 00:37:07.911447 104530 round_trippers.go:463] GET https://192.168.39.40:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I1212 00:37:07.911463 104530 round_trippers.go:469] Request Headers:
I1212 00:37:07.911471 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:07.911477 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:07.914264 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:07.914281 104530 round_trippers.go:577] Response Headers:
I1212 00:37:07.914291 104530 round_trippers.go:580] Audit-Id: 48f5a121-1933-4a22-a355-5496f01879d3
I1212 00:37:07.914299 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:07.914306 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:07.914317 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:07.914324 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:07.914335 104530 round_trippers.go:580] Content-Length: 292
I1212 00:37:07.914346 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:07 GMT
I1212 00:37:07.914379 104530 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"75766566-fdf3-4c8a-abaa-ce458e02b129","resourceVersion":"1201","creationTimestamp":"2023-12-12T00:30:03Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I1212 00:37:07.914516 104530 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-859606" context rescaled to 1 replicas
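
The rescale above goes through the deployment's scale subresource: the GET .../deployments/coredns/scale a few lines up, followed by an update only when the replica count differs (here it is already 1, so only the GET appears). A client-go sketch of that pattern, assuming an existing clientset:

package scale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescale sets a deployment's replica count through the scale subresource,
// the same GET-then-update pattern as the coredns rescale in the log; the
// update is skipped when the count already matches (as it does here).
func rescale(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	sc, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Spec.Replicas == replicas {
		return nil
	}
	sc.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, sc, metav1.UpdateOptions{})
	return err
}
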
I1212 00:37:07.914548 104530 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
I1212 00:37:07.917208 104530 out.go:177] * Verifying Kubernetes components...
I1212 00:37:07.918721 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1212 00:37:08.110540 104530 command_runner.go:130] > apiVersion: v1
I1212 00:37:08.110578 104530 command_runner.go:130] > data:
I1212 00:37:08.110585 104530 command_runner.go:130] > Corefile: |
I1212 00:37:08.110591 104530 command_runner.go:130] > .:53 {
I1212 00:37:08.110596 104530 command_runner.go:130] > log
I1212 00:37:08.110602 104530 command_runner.go:130] > errors
I1212 00:37:08.110608 104530 command_runner.go:130] > health {
I1212 00:37:08.110614 104530 command_runner.go:130] > lameduck 5s
I1212 00:37:08.110620 104530 command_runner.go:130] > }
I1212 00:37:08.110627 104530 command_runner.go:130] > ready
I1212 00:37:08.110636 104530 command_runner.go:130] > kubernetes cluster.local in-addr.arpa ip6.arpa {
I1212 00:37:08.110647 104530 command_runner.go:130] > pods insecure
I1212 00:37:08.110655 104530 command_runner.go:130] > fallthrough in-addr.arpa ip6.arpa
I1212 00:37:08.110667 104530 command_runner.go:130] > ttl 30
I1212 00:37:08.110673 104530 command_runner.go:130] > }
I1212 00:37:08.110683 104530 command_runner.go:130] > prometheus :9153
I1212 00:37:08.110693 104530 command_runner.go:130] > hosts {
I1212 00:37:08.110705 104530 command_runner.go:130] > 192.168.39.1 host.minikube.internal
I1212 00:37:08.110714 104530 command_runner.go:130] > fallthrough
I1212 00:37:08.110724 104530 command_runner.go:130] > }
I1212 00:37:08.110732 104530 command_runner.go:130] > forward . /etc/resolv.conf {
I1212 00:37:08.110737 104530 command_runner.go:130] > max_concurrent 1000
I1212 00:37:08.110743 104530 command_runner.go:130] > }
I1212 00:37:08.110748 104530 command_runner.go:130] > cache 30
I1212 00:37:08.110755 104530 command_runner.go:130] > loop
I1212 00:37:08.110761 104530 command_runner.go:130] > reload
I1212 00:37:08.110765 104530 command_runner.go:130] > loadbalance
I1212 00:37:08.110771 104530 command_runner.go:130] > }
I1212 00:37:08.110776 104530 command_runner.go:130] > kind: ConfigMap
I1212 00:37:08.110782 104530 command_runner.go:130] > metadata:
I1212 00:37:08.110787 104530 command_runner.go:130] > creationTimestamp: "2023-12-12T00:30:03Z"
I1212 00:37:08.110793 104530 command_runner.go:130] > name: coredns
I1212 00:37:08.110797 104530 command_runner.go:130] > namespace: kube-system
I1212 00:37:08.110804 104530 command_runner.go:130] > resourceVersion: "407"
I1212 00:37:08.110808 104530 command_runner.go:130] > uid: 58df000b-e223-4f9f-a0ce-e6a345bc8b1e
I1212 00:37:08.110871 104530 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
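
The skip decision is a content check against the Corefile dumped line by line above: if the hosts block already names host.minikube.internal, the ConfigMap is left alone. A sketch of that check, assuming an existing clientset (package and function names illustrative):

package corednscheck

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasHostRecord reports whether the coredns Corefile already carries the
// host.minikube.internal hosts entry, the condition that lets minikube skip
// rewriting the ConfigMap above.
func hasHostRecord(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
}
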
I1212 00:37:08.110910 104530 node_ready.go:35] waiting up to 6m0s for node "multinode-859606" to be "Ready" ...
I1212 00:37:08.111108 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:08.111132 104530 round_trippers.go:469] Request Headers:
I1212 00:37:08.111144 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:08.111155 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:08.115592 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:08.115608 104530 round_trippers.go:577] Response Headers:
I1212 00:37:08.115615 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:08.115620 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:08 GMT
I1212 00:37:08.115625 104530 round_trippers.go:580] Audit-Id: 78e22458-8a23-48e3-9e27-578febb59a20
I1212 00:37:08.115630 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:08.115635 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:08.115640 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:08.116255 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:08.289077 104530 request.go:629] Waited for 172.38964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:08.289150 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:08.289155 104530 round_trippers.go:469] Request Headers:
I1212 00:37:08.289163 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:08.289178 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:08.291767 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:08.291787 104530 round_trippers.go:577] Response Headers:
I1212 00:37:08.291797 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:08.291806 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:08 GMT
I1212 00:37:08.291817 104530 round_trippers.go:580] Audit-Id: bd808d02-17db-44e3-ae16-8f55b7323fe8
I1212 00:37:08.291829 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:08.291841 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:08.291852 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:08.292123 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:08.793301 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:08.793331 104530 round_trippers.go:469] Request Headers:
I1212 00:37:08.793340 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:08.793346 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:08.796482 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:08.796514 104530 round_trippers.go:577] Response Headers:
I1212 00:37:08.796525 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:08.796533 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:08.796539 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:08.796544 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:08 GMT
I1212 00:37:08.796549 104530 round_trippers.go:580] Audit-Id: f551640f-6397-4f2f-ad7b-75e7a1ad4ab4
I1212 00:37:08.796554 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:08.796722 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:09.293409 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:09.293442 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.293453 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.293461 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.296451 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:09.296469 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.296477 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.296482 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.296487 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.296496 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.296519 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.296527 104530 round_trippers.go:580] Audit-Id: 2a8eef1a-1ec0-43cd-aba1-3dcd1603fa87
I1212 00:37:09.296803 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1149","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5284 chars]
I1212 00:37:09.793597 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:09.793626 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.793645 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.793664 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.796604 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:09.796624 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.796631 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.796636 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.796644 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.796649 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.796654 104530 round_trippers.go:580] Audit-Id: 022e877a-18b3-43f9-ab6d-dff649dfc9f8
I1212 00:37:09.796659 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.796949 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:09.797279 104530 node_ready.go:49] node "multinode-859606" has status "Ready":"True"
I1212 00:37:09.797303 104530 node_ready.go:38] duration metric: took 1.686360286s waiting for node "multinode-859606" to be "Ready" ...
I1212 00:37:09.797315 104530 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1212 00:37:09.797375 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:09.797386 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.797396 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.797406 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.801844 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:09.801867 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.801876 104530 round_trippers.go:580] Audit-Id: 420ea970-9f48-457c-b0f7-7ec9ec1a588e
I1212 00:37:09.801885 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.801894 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.801904 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.801927 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.801938 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.803506 104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1216"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83879 chars]
I1212 00:37:09.806061 104530 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
I1212 00:37:09.806150 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:09.806162 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.806174 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.806184 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.808345 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:09.808361 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.808374 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.808383 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.808397 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.808405 104530 round_trippers.go:580] Audit-Id: 9a9463c1-b358-492e-b922-367c6104207c
I1212 00:37:09.808413 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.808422 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.808706 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:09.809215 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:09.809231 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.809238 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.809244 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.811292 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:09.811307 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.811316 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.811323 104530 round_trippers.go:580] Audit-Id: f5ebccd1-dc5e-4d64-b27a-f59d7a10b2c3
I1212 00:37:09.811331 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.811346 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.811359 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.811367 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.811572 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:09.812037 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:09.812052 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.812059 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.812065 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.813996 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:09.814010 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.814019 104530 round_trippers.go:580] Audit-Id: e587521b-4190-4251-9713-9fe4cfdc8df1
I1212 00:37:09.814027 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.814034 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.814043 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.814054 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.814063 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.814382 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:09.889078 104530 request.go:629] Waited for 74.284522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:09.889133 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:09.889139 104530 round_trippers.go:469] Request Headers:
I1212 00:37:09.889148 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:09.889154 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:09.892171 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:09.892194 104530 round_trippers.go:577] Response Headers:
I1212 00:37:09.892203 104530 round_trippers.go:580] Audit-Id: 6c0b5759-dcf0-429c-88bf-c342959f386c
I1212 00:37:09.892229 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:09.892241 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:09.892250 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:09.892269 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:09.892283 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:09 GMT
I1212 00:37:09.892510 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:10.393716 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:10.393745 104530 round_trippers.go:469] Request Headers:
I1212 00:37:10.393755 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:10.393763 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:10.396859 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:10.396889 104530 round_trippers.go:577] Response Headers:
I1212 00:37:10.396899 104530 round_trippers.go:580] Audit-Id: 5e8103b3-ec4e-4213-995d-24c751476571
I1212 00:37:10.396907 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:10.396915 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:10.396923 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:10.396931 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:10.396939 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:10 GMT
I1212 00:37:10.397178 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:10.397682 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:10.397698 104530 round_trippers.go:469] Request Headers:
I1212 00:37:10.397713 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:10.397722 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:10.399962 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:10.399981 104530 round_trippers.go:577] Response Headers:
I1212 00:37:10.399991 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:10 GMT
I1212 00:37:10.399999 104530 round_trippers.go:580] Audit-Id: 63def391-cbb3-428c-8bda-86f13b98f5c0
I1212 00:37:10.400014 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:10.400026 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:10.400035 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:10.400046 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:10.400207 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:10.894000 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:10.894037 104530 round_trippers.go:469] Request Headers:
I1212 00:37:10.894048 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:10.894057 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:10.899308 104530 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I1212 00:37:10.899334 104530 round_trippers.go:577] Response Headers:
I1212 00:37:10.899344 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:10.899355 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:10.899362 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:10.899369 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:10 GMT
I1212 00:37:10.899377 104530 round_trippers.go:580] Audit-Id: a6f54ff0-c318-428c-9e20-5afa1d44815f
I1212 00:37:10.899383 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:10.899671 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:10.900196 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:10.900212 104530 round_trippers.go:469] Request Headers:
I1212 00:37:10.900219 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:10.900225 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:10.902531 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:10.902550 104530 round_trippers.go:577] Response Headers:
I1212 00:37:10.902560 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:10.902568 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:10.902576 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:10 GMT
I1212 00:37:10.902586 104530 round_trippers.go:580] Audit-Id: 72a3507b-3092-4d9e-bfa5-e84c0a5f5811
I1212 00:37:10.902599 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:10.902610 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:10.902856 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:11.393521 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:11.393559 104530 round_trippers.go:469] Request Headers:
I1212 00:37:11.393569 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:11.393583 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:11.397962 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:11.398001 104530 round_trippers.go:577] Response Headers:
I1212 00:37:11.398012 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:11.398020 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:11.398028 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:11.398036 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:11 GMT
I1212 00:37:11.398048 104530 round_trippers.go:580] Audit-Id: 36163564-e6ac-4456-b495-9930bf8c7c95
I1212 00:37:11.398056 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:11.399514 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:11.400077 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:11.400105 104530 round_trippers.go:469] Request Headers:
I1212 00:37:11.400115 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:11.400129 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:11.402841 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:11.402874 104530 round_trippers.go:577] Response Headers:
I1212 00:37:11.402895 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:11 GMT
I1212 00:37:11.402903 104530 round_trippers.go:580] Audit-Id: cf888e1f-3585-4d4c-b47a-d65c1b673f60
I1212 00:37:11.402913 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:11.402923 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:11.402936 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:11.402944 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:11.403152 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:11.893890 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:11.893921 104530 round_trippers.go:469] Request Headers:
I1212 00:37:11.893930 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:11.893936 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:11.896885 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:11.896910 104530 round_trippers.go:577] Response Headers:
I1212 00:37:11.896920 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:11.896927 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:11.896934 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:11.896942 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:11.896949 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:11 GMT
I1212 00:37:11.896956 104530 round_trippers.go:580] Audit-Id: 560ccbf4-a93e-418b-97ef-b02d5b4a7c2a
I1212 00:37:11.897291 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:11.897761 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:11.897778 104530 round_trippers.go:469] Request Headers:
I1212 00:37:11.897785 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:11.897791 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:11.900338 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:11.900381 104530 round_trippers.go:577] Response Headers:
I1212 00:37:11.900391 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:11 GMT
I1212 00:37:11.900400 104530 round_trippers.go:580] Audit-Id: 57fad163-7798-4518-b48a-afffca40ee66
I1212 00:37:11.900408 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:11.900416 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:11.900428 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:11.900438 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:11.900617 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:11.900907 104530 pod_ready.go:102] pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace has status "Ready":"False"
I1212 00:37:12.393289 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:12.393323 104530 round_trippers.go:469] Request Headers:
I1212 00:37:12.393337 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:12.393346 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:12.397658 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:12.397679 104530 round_trippers.go:577] Response Headers:
I1212 00:37:12.397686 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:12.397691 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:12 GMT
I1212 00:37:12.397697 104530 round_trippers.go:580] Audit-Id: 97d200a8-1144-4cfb-b7e7-ae622c67a09e
I1212 00:37:12.397702 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:12.397707 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:12.397712 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:12.398001 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:12.398453 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:12.398468 104530 round_trippers.go:469] Request Headers:
I1212 00:37:12.398475 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:12.398480 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:12.401097 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:12.401115 104530 round_trippers.go:577] Response Headers:
I1212 00:37:12.401122 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:12 GMT
I1212 00:37:12.401127 104530 round_trippers.go:580] Audit-Id: 2a27c4e6-1e77-48fe-b9ff-18537a1ba771
I1212 00:37:12.401135 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:12.401145 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:12.401153 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:12.401168 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:12.401283 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:12.893943 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:12.893969 104530 round_trippers.go:469] Request Headers:
I1212 00:37:12.893977 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:12.893984 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:12.897025 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:12.897047 104530 round_trippers.go:577] Response Headers:
I1212 00:37:12.897057 104530 round_trippers.go:580] Audit-Id: 551ec886-a3c8-4be6-946b-459f81574f91
I1212 00:37:12.897064 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:12.897071 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:12.897082 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:12.897091 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:12.897103 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:12 GMT
I1212 00:37:12.897283 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:12.898253 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:12.898328 104530 round_trippers.go:469] Request Headers:
I1212 00:37:12.898343 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:12.898352 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:12.902125 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:12.902151 104530 round_trippers.go:577] Response Headers:
I1212 00:37:12.902161 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:12 GMT
I1212 00:37:12.902171 104530 round_trippers.go:580] Audit-Id: bb98bd7a-c04d-437d-aef6-72f5de2e6aac
I1212 00:37:12.902182 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:12.902196 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:12.902214 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:12.902227 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:12.902594 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:13.393264 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:13.393294 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.393307 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.393317 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.396512 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:13.396534 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.396541 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.396546 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.396552 104530 round_trippers.go:580] Audit-Id: 7f6212d1-aaf4-45df-a3b0-bb989bb1227a
I1212 00:37:13.396560 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.396569 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.396578 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.396776 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1172","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
I1212 00:37:13.397248 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:13.397262 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.397270 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.397275 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.399404 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:13.399423 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.399433 104530 round_trippers.go:580] Audit-Id: 77e44ea3-4125-4d4b-9450-f85475c1539a
I1212 00:37:13.399440 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.399447 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.399454 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.399464 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.399471 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.399656 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:13.893292 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-t9jz8
I1212 00:37:13.893317 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.893325 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.893331 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.896458 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:13.896475 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.896487 104530 round_trippers.go:580] Audit-Id: ac46caca-dc3e-4d98-bda6-e430bb1fa8ae
I1212 00:37:13.896494 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.896512 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.896519 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.896526 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.896534 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.897107 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1231","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
I1212 00:37:13.897587 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:13.897603 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.897613 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.897621 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.900547 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:13.900568 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.900578 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.900586 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.900595 104530 round_trippers.go:580] Audit-Id: e3dbde9a-cc4a-4762-867f-d9e9a410aef1
I1212 00:37:13.900603 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.900611 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.900643 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.900901 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:13.901209 104530 pod_ready.go:92] pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:13.901226 104530 pod_ready.go:81] duration metric: took 4.09514334s waiting for pod "coredns-5dd5756b68-t9jz8" in "kube-system" namespace to be "Ready" ...
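[editor's note] The pod_ready lines above show the readiness poll pattern repeated throughout this log: GET the pod, GET its node, and declare the pod ready once its PodReady condition reports True. A minimal sketch of that condition check, assuming client-go and a kubeconfig at the default location (isPodReady is an illustrative helper, not minikube's actual pod_ready.go code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True, the
// check behind the `has status "Ready":"True"` lines in this log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-5dd5756b68-t9jz8", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}
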
I1212 00:37:13.901265 104530 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:13.901326 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-859606
I1212 00:37:13.901336 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.901346 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.901356 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.903529 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:13.903549 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.903558 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.903566 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.903574 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.903582 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.903590 104530 round_trippers.go:580] Audit-Id: d34bc26a-3f02-4be9-9af2-1ad0fadfbfa3
I1212 00:37:13.903596 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.903967 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-859606","namespace":"kube-system","uid":"7d6ae370-b910-4aef-8729-e141b307ae17","resourceVersion":"1218","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.40:2379","kubernetes.io/config.hash":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.mirror":"3caa97c2c89fd490e8012711c8c24bd3","kubernetes.io/config.seen":"2023-12-12T00:30:03.645880014Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6072 chars]
I1212 00:37:13.904430 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:13.904447 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.904454 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.904460 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.906383 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:13.906404 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.906413 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.906420 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.906429 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.906444 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.906453 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.906466 104530 round_trippers.go:580] Audit-Id: 3f37632a-0e9f-4887-b36f-43d17d2e4134
I1212 00:37:13.906620 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:13.906989 104530 pod_ready.go:92] pod "etcd-multinode-859606" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:13.907016 104530 pod_ready.go:81] duration metric: took 5.741099ms waiting for pod "etcd-multinode-859606" in "kube-system" namespace to be "Ready" ...
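[editor's note] The etcd-multinode-859606 response body above marks it as a static pod: its spec comes from a file on the node (kubernetes.io/config.seen with config.source: file), the kubelet publishes it to the API server as a mirror pod (kubernetes.io/config.mirror), and its ownerReference is the Node itself rather than a controller. The kube-apiserver and kube-controller-manager pods below carry the same markers. A small sketch of telling mirror pods apart by that annotation (the key is copied from the response bodies above):

package main

import "fmt"

// isMirrorPod reports whether a pod was published by the kubelet as the
// mirror of a file-based static pod, using the annotation visible in the
// etcd and kube-apiserver response bodies in this log.
func isMirrorPod(annotations map[string]string) bool {
	_, ok := annotations["kubernetes.io/config.mirror"]
	return ok
}

func main() {
	fmt.Println(isMirrorPod(map[string]string{
		"kubernetes.io/config.mirror": "3caa97c2c89fd490e8012711c8c24bd3",
	})) // true
}
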
I1212 00:37:13.907041 104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:13.907100 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-859606
I1212 00:37:13.907110 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.907118 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.907125 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.909221 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:13.909237 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.909245 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.909253 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.909260 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.909267 104530 round_trippers.go:580] Audit-Id: 10369159-e62c-4dd4-8d77-2e82a59d784d
I1212 00:37:13.909275 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.909287 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.909569 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-859606","namespace":"kube-system","uid":"0060efa7-dc06-439e-878f-b93b0e016326","resourceVersion":"1216","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.40:8443","kubernetes.io/config.hash":"6579d881f0553848179768317ac84853","kubernetes.io/config.mirror":"6579d881f0553848179768317ac84853","kubernetes.io/config.seen":"2023-12-12T00:29:55.207817853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7607 chars]
I1212 00:37:13.909929 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:13.909943 104530 round_trippers.go:469] Request Headers:
I1212 00:37:13.909953 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:13.909961 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:13.911781 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:13.911800 104530 round_trippers.go:577] Response Headers:
I1212 00:37:13.911808 104530 round_trippers.go:580] Audit-Id: c9f36dd0-0f04-4274-9537-6c203e1b93b8
I1212 00:37:13.911817 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:13.911825 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:13.911833 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:13.911841 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:13.911848 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:13 GMT
I1212 00:37:13.912152 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:13.912472 104530 pod_ready.go:92] pod "kube-apiserver-multinode-859606" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:13.912489 104530 pod_ready.go:81] duration metric: took 5.438494ms waiting for pod "kube-apiserver-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:13.912497 104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:14.088914 104530 request.go:629] Waited for 176.352891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:14.089000 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:14.089007 104530 round_trippers.go:469] Request Headers:
I1212 00:37:14.089021 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:14.089037 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:14.092809 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:14.092835 104530 round_trippers.go:577] Response Headers:
I1212 00:37:14.092845 104530 round_trippers.go:580] Audit-Id: 2c2f7c55-459e-4d01-a3f2-96b1b6cb8c8b
I1212 00:37:14.092853 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:14.092861 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:14.092869 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:14.092876 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:14.092885 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:14 GMT
I1212 00:37:14.093110 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:14.288948 104530 request.go:629] Waited for 195.377005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:14.289023 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:14.289032 104530 round_trippers.go:469] Request Headers:
I1212 00:37:14.289039 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:14.289053 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:14.291661 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:14.291688 104530 round_trippers.go:577] Response Headers:
I1212 00:37:14.291699 104530 round_trippers.go:580] Audit-Id: 9a8ff279-becc-4981-a5d3-bab45d355f5b
I1212 00:37:14.291709 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:14.291716 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:14.291721 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:14.291729 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:14.291734 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:14 GMT
I1212 00:37:14.291936 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:14.489383 104530 request.go:629] Waited for 197.063929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:14.489461 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:14.489467 104530 round_trippers.go:469] Request Headers:
I1212 00:37:14.489475 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:14.489481 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:14.492357 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:14.492379 104530 round_trippers.go:577] Response Headers:
I1212 00:37:14.492386 104530 round_trippers.go:580] Audit-Id: 12e5b7b5-fd32-4fe6-b1ff-eb7b4430f001
I1212 00:37:14.492392 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:14.492397 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:14.492402 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:14.492407 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:14.492412 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:14 GMT
I1212 00:37:14.492593 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:14.689101 104530 request.go:629] Waited for 196.091909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:14.689191 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:14.689198 104530 round_trippers.go:469] Request Headers:
I1212 00:37:14.689208 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:14.689218 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:14.691837 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:14.691858 104530 round_trippers.go:577] Response Headers:
I1212 00:37:14.691865 104530 round_trippers.go:580] Audit-Id: 46cb3999-d30b-4074-ad3e-89d7533c5936
I1212 00:37:14.691870 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:14.691875 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:14.691880 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:14.691885 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:14.691891 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:14 GMT
I1212 00:37:14.692335 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:15.193200 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:15.193224 104530 round_trippers.go:469] Request Headers:
I1212 00:37:15.193232 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:15.193239 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:15.196981 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:15.197000 104530 round_trippers.go:577] Response Headers:
I1212 00:37:15.197006 104530 round_trippers.go:580] Audit-Id: e9469ca3-765f-4b94-bad8-b62081cb2809
I1212 00:37:15.197012 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:15.197034 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:15.197042 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:15.197049 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:15.197056 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:15 GMT
I1212 00:37:15.197197 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:15.197635 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:15.197650 104530 round_trippers.go:469] Request Headers:
I1212 00:37:15.197657 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:15.197663 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:15.199909 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:15.199943 104530 round_trippers.go:577] Response Headers:
I1212 00:37:15.199952 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:15 GMT
I1212 00:37:15.199959 104530 round_trippers.go:580] Audit-Id: 55872ce3-0e31-4a29-bd8d-2fef53f7f5ad
I1212 00:37:15.199967 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:15.199975 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:15.199983 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:15.199991 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:15.200167 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:15.693002 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:15.693027 104530 round_trippers.go:469] Request Headers:
I1212 00:37:15.693035 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:15.693041 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:15.695104 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:15.695127 104530 round_trippers.go:577] Response Headers:
I1212 00:37:15.695138 104530 round_trippers.go:580] Audit-Id: e8dafcef-e232-4564-93ec-c99146d453a6
I1212 00:37:15.695144 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:15.695152 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:15.695161 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:15.695170 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:15.695180 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:15 GMT
I1212 00:37:15.695539 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:15.695954 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:15.695966 104530 round_trippers.go:469] Request Headers:
I1212 00:37:15.695974 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:15.695979 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:15.697613 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:15.697631 104530 round_trippers.go:577] Response Headers:
I1212 00:37:15.697640 104530 round_trippers.go:580] Audit-Id: cd894f72-99d1-44a1-ba36-abb33011003a
I1212 00:37:15.697649 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:15.697656 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:15.697661 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:15.697666 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:15.697671 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:15 GMT
I1212 00:37:15.697922 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:16.193670 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:16.193698 104530 round_trippers.go:469] Request Headers:
I1212 00:37:16.193707 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:16.193712 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:16.196864 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:16.196891 104530 round_trippers.go:577] Response Headers:
I1212 00:37:16.196899 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:16.196904 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:16.196909 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:16 GMT
I1212 00:37:16.196920 104530 round_trippers.go:580] Audit-Id: 7e651bce-3845-4b66-8fb2-622327e8d40b
I1212 00:37:16.196928 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:16.196936 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:16.197330 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:16.197766 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:16.197783 104530 round_trippers.go:469] Request Headers:
I1212 00:37:16.197790 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:16.197796 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:16.200198 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:16.200219 104530 round_trippers.go:577] Response Headers:
I1212 00:37:16.200225 104530 round_trippers.go:580] Audit-Id: 9972e939-1cb4-4a78-8c0d-11a91b0625a8
I1212 00:37:16.200230 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:16.200235 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:16.200241 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:16.200249 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:16.200254 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:16 GMT
I1212 00:37:16.200367 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:16.200638 104530 pod_ready.go:102] pod "kube-controller-manager-multinode-859606" in "kube-system" namespace has status "Ready":"False"
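[editor's note] Here kube-controller-manager-multinode-859606 still reports "Ready":"False", so the wait loop keeps re-polling; the timestamps suggest roughly a 500ms cadence against the 6m0s budget logged when the wait began. A minimal sketch of such a loop, assuming client-go and apimachinery (interval and structure are inferred from the log, not minikube's exact implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls every 500ms, the cadence the log timestamps suggest,
// until the pod's PodReady condition is True or the 6m timeout expires.
func waitPodReady(client kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(client, "kube-system", "kube-controller-manager-multinode-859606"))
}
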
I1212 00:37:16.693040 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:16.693064 104530 round_trippers.go:469] Request Headers:
I1212 00:37:16.693073 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:16.693090 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:16.696324 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:16.696344 104530 round_trippers.go:577] Response Headers:
I1212 00:37:16.696354 104530 round_trippers.go:580] Audit-Id: cfb4110b-a12c-4dd5-bb27-d5b38a9bdf99
I1212 00:37:16.696363 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:16.696371 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:16.696380 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:16.696388 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:16.696393 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:16 GMT
I1212 00:37:16.696757 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:16.697175 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:16.697186 104530 round_trippers.go:469] Request Headers:
I1212 00:37:16.697193 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:16.697199 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:16.699444 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:16.699466 104530 round_trippers.go:577] Response Headers:
I1212 00:37:16.699482 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:16.699489 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:16.699508 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:16.699514 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:16.699519 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:16 GMT
I1212 00:37:16.699524 104530 round_trippers.go:580] Audit-Id: 86f1d394-268f-4773-8a4f-65dfa15966b3
I1212 00:37:16.699786 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:17.193535 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:17.193562 104530 round_trippers.go:469] Request Headers:
I1212 00:37:17.193571 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:17.193577 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:17.197001 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:17.197029 104530 round_trippers.go:577] Response Headers:
I1212 00:37:17.197039 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:17.197048 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:17.197056 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:17.197063 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:17 GMT
I1212 00:37:17.197078 104530 round_trippers.go:580] Audit-Id: 0039bd07-2809-441c-8a08-a005a1fb9474
I1212 00:37:17.197086 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:17.197590 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:17.198195 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:17.198215 104530 round_trippers.go:469] Request Headers:
I1212 00:37:17.198227 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:17.198235 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:17.200561 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:17.200580 104530 round_trippers.go:577] Response Headers:
I1212 00:37:17.200594 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:17.200602 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:17.200608 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:17 GMT
I1212 00:37:17.200615 104530 round_trippers.go:580] Audit-Id: 7ca59026-3641-45f9-af2d-e56b2f15bbf4
I1212 00:37:17.200623 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:17.200631 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:17.200818 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:17.693526 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:17.693559 104530 round_trippers.go:469] Request Headers:
I1212 00:37:17.693573 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:17.693581 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:17.696472 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:17.696503 104530 round_trippers.go:577] Response Headers:
I1212 00:37:17.696515 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:17 GMT
I1212 00:37:17.696522 104530 round_trippers.go:580] Audit-Id: f3b1cbfa-67ea-48ba-a602-3e51e26733e7
I1212 00:37:17.696529 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:17.696537 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:17.696546 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:17.696556 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:17.696733 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:17.697203 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:17.697219 104530 round_trippers.go:469] Request Headers:
I1212 00:37:17.697230 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:17.697237 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:17.699246 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:17.699267 104530 round_trippers.go:577] Response Headers:
I1212 00:37:17.699274 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:17.699279 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:17.699284 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:17.699289 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:17.699303 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:17 GMT
I1212 00:37:17.699311 104530 round_trippers.go:580] Audit-Id: 537e896a-ad01-467d-8765-b18cc048639c
I1212 00:37:17.699750 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:18.193513 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:18.193539 104530 round_trippers.go:469] Request Headers:
I1212 00:37:18.193547 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:18.193553 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:18.196642 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:18.196663 104530 round_trippers.go:577] Response Headers:
I1212 00:37:18.196670 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:18.196675 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:18.196680 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:18.196685 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:18.196690 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:18 GMT
I1212 00:37:18.196695 104530 round_trippers.go:580] Audit-Id: 01c5b2b7-3578-4302-9a5b-dbb75c34b269
I1212 00:37:18.197211 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:18.197615 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:18.197626 104530 round_trippers.go:469] Request Headers:
I1212 00:37:18.197637 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:18.197645 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:18.199967 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:18.199986 104530 round_trippers.go:577] Response Headers:
I1212 00:37:18.199995 104530 round_trippers.go:580] Audit-Id: b956bf4f-9b6c-4de6-87c0-84916a54c9aa
I1212 00:37:18.200004 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:18.200012 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:18.200019 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:18.200027 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:18.200035 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:18 GMT
I1212 00:37:18.200333 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:18.692979 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:18.693006 104530 round_trippers.go:469] Request Headers:
I1212 00:37:18.693014 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:18.693021 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:18.696863 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:18.696888 104530 round_trippers.go:577] Response Headers:
I1212 00:37:18.696895 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:18.696901 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:18 GMT
I1212 00:37:18.696906 104530 round_trippers.go:580] Audit-Id: d5c6e54d-aaea-4bf3-8a70-4dc0b57b264e
I1212 00:37:18.696911 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:18.696916 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:18.696921 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:18.697946 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1178","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
I1212 00:37:18.698353 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:18.698366 104530 round_trippers.go:469] Request Headers:
I1212 00:37:18.698373 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:18.698381 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:18.700609 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:18.700629 104530 round_trippers.go:577] Response Headers:
I1212 00:37:18.700639 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:18.700647 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:18 GMT
I1212 00:37:18.700655 104530 round_trippers.go:580] Audit-Id: 0dde864e-ad38-4768-932a-24947963eeef
I1212 00:37:18.700662 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:18.700669 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:18.700677 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:18.700840 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:18.701109 104530 pod_ready.go:102] pod "kube-controller-manager-multinode-859606" in "kube-system" namespace has status "Ready":"False"
I1212 00:37:19.193617 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-859606
I1212 00:37:19.193643 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.193652 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.193658 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.197048 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:19.197071 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.197078 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.197083 104530 round_trippers.go:580] Audit-Id: 20502bbb-60e6-48d0-b283-2696575d955f
I1212 00:37:19.197090 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.197095 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.197100 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.197106 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.197298 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-859606","namespace":"kube-system","uid":"901bf3ab-f34d-42c8-b1da-d5431ae0219f","resourceVersion":"1240","creationTimestamp":"2023-12-12T00:30:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.mirror":"72e889ffb6232267cda1128265168aa7","kubernetes.io/config.seen":"2023-12-12T00:30:03.645885674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
I1212 00:37:19.197741 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:19.197753 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.197760 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.197766 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.199854 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:19.199879 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.199889 104530 round_trippers.go:580] Audit-Id: d3c788eb-c748-41e7-8b78-70c1417d3584
I1212 00:37:19.199898 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.199907 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.199932 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.199946 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.199954 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.200107 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:19.200426 104530 pod_ready.go:92] pod "kube-controller-manager-multinode-859606" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:19.200447 104530 pod_ready.go:81] duration metric: took 5.287942632s waiting for pod "kube-controller-manager-multinode-859606" in "kube-system" namespace to be "Ready" ...
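[editor's note] The `pod_ready.go` lines above are a poll loop: GET the pod, check its Ready condition, sleep, repeat, with a 6-minute budget per pod. A minimal sketch of that pattern with client-go follows; it is not minikube's actual implementation, and the kubeconfig path and pod name are taken from this log for illustration.
```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same
// predicate the "has status \"Ready\"" log lines are printing.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, matching the "waiting up to 6m0s"
	// budget shown in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx,
				"kube-controller-manager-multinode-859606", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			return podReady(pod), nil
		})
	fmt.Println("ready:", err == nil)
}
```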
I1212 00:37:19.200463 104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
I1212 00:37:19.200518 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6f6zz
I1212 00:37:19.200527 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.200538 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.200547 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.203112 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:19.203134 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.203143 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.203151 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.203159 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.203168 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.203177 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.203185 104530 round_trippers.go:580] Audit-Id: d4bddcbb-39f6-4c08-83da-2d4523904cda
I1212 00:37:19.203320 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6f6zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"d5931621-47fd-4f1a-bf46-813dd8352f00","resourceVersion":"1087","creationTimestamp":"2023-12-12T00:32:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:32:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
I1212 00:37:19.203874 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m03
I1212 00:37:19.203896 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.203907 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.203928 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.206014 104530 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I1212 00:37:19.206033 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.206049 104530 round_trippers.go:580] Content-Length: 210
I1212 00:37:19.206061 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.206068 104530 round_trippers.go:580] Audit-Id: 4aef6f8a-43a6-4188-a386-e5e2d3a1f6f3
I1212 00:37:19.206082 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.206089 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.206097 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.206105 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.206236 104530 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-859606-m03\" not found","reason":"NotFound","details":{"name":"multinode-859606-m03","kind":"nodes"},"code":404}
I1212 00:37:19.206386 104530 pod_ready.go:97] node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
I1212 00:37:19.206408 104530 pod_ready.go:81] duration metric: took 5.937337ms waiting for pod "kube-proxy-6f6zz" in "kube-system" namespace to be "Ready" ...
E1212 00:37:19.206423 104530 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-859606-m03" hosting pod "kube-proxy-6f6zz" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-859606-m03": nodes "multinode-859606-m03" not found
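[editor's note] The 404 just above shows why this wait doesn't fail outright: kube-proxy-6f6zz is scheduled to multinode-859606-m03, a node that no longer exists after the restart, so the pod is skipped rather than awaited. A minimal sketch of that branch, with client setup as in the previous sketch (again illustrative, not minikube's code):
```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hostNodeReady reports (ready, skip): skip is true when the hosting node is
// gone, mirroring the "(skipping!)" message in the log.
func hostNodeReady(ctx context.Context, cs kubernetes.Interface, nodeName string) (ready, skip bool, err error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, true, nil // node deleted: don't block on its pods
	}
	if err != nil {
		return false, false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, false, nil
		}
	}
	return false, false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, skip, err := hostNodeReady(context.Background(), cs, "multinode-859606-m03")
	fmt.Printf("ready=%t skip=%t err=%v\n", ready, skip, err)
}
```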
I1212 00:37:19.206431 104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
I1212 00:37:19.206494 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-prf7f
I1212 00:37:19.206504 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.206515 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.206527 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.208365 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:19.208385 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.208394 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.208403 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.208418 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.208426 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.208437 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.208447 104530 round_trippers.go:580] Audit-Id: c0033a2c-2985-4a9c-95d1-b824f5e20713
I1212 00:37:19.208684 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-prf7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"8238226c-3d01-4b91-963b-7360206b8615","resourceVersion":"1206","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
I1212 00:37:19.209132 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:19.209150 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.209164 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.209177 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.210970 104530 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I1212 00:37:19.210988 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.210997 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.211006 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.211020 104530 round_trippers.go:580] Audit-Id: 396956f0-54b8-4778-ab7c-a37fe9b33b2e
I1212 00:37:19.211027 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.211041 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.211052 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.211256 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:19.211606 104530 pod_ready.go:92] pod "kube-proxy-prf7f" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:19.211630 104530 pod_ready.go:81] duration metric: took 5.187099ms waiting for pod "kube-proxy-prf7f" in "kube-system" namespace to be "Ready" ...
I1212 00:37:19.211641 104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
I1212 00:37:19.288985 104530 request.go:629] Waited for 77.268211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
I1212 00:37:19.289047 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q9h26
I1212 00:37:19.289060 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.289074 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.289085 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.291884 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:19.291923 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.291934 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.291943 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.291954 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.291962 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.291969 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.291984 104530 round_trippers.go:580] Audit-Id: f9222a80-11b7-4070-b9c2-ea9633cc9696
I1212 00:37:19.292162 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q9h26","generateName":"kube-proxy-","namespace":"kube-system","uid":"7dd12033-bf81-4cd3-a412-3fe3211dc87b","resourceVersion":"978","creationTimestamp":"2023-12-12T00:31:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"92d6756f-ef04-4d8e-970a-e73854372aee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:31:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92d6756f-ef04-4d8e-970a-e73854372aee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
I1212 00:37:19.489027 104530 request.go:629] Waited for 196.400938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
I1212 00:37:19.489092 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606-m02
I1212 00:37:19.489097 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.489104 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.489111 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.492013 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:19.492033 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.492040 104530 round_trippers.go:580] Audit-Id: 78f39b63-2309-4f9b-bec7-2fb901d235db
I1212 00:37:19.492045 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.492051 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.492060 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.492069 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.492078 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.492270 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606-m02","uid":"4dead465-c032-4274-8147-a5a7d38c1bf5","resourceVersion":"1083","creationTimestamp":"2023-12-12T00:34:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T00_35_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:34:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3805 chars]
I1212 00:37:19.492641 104530 pod_ready.go:92] pod "kube-proxy-q9h26" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:19.492662 104530 pod_ready.go:81] duration metric: took 281.010934ms waiting for pod "kube-proxy-q9h26" in "kube-system" namespace to be "Ready" ...
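[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's local rate limiter, not from the apiserver: the rapid polling GETs exhaust the default budget (QPS=5, Burst=10) and excess requests are delayed in the client. A sketch of where that knob lives; the raised values are illustrative only.
```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5 and Burst=10; polling bursts beyond that
	// budget are delayed locally, producing the "client-side throttling"
	// waits seen in this log. Raising the limits trades apiserver load for
	// lower client latency.
	cfg.QPS = 50
	cfg.Burst = 100
	fmt.Printf("qps=%v burst=%v\n", cfg.QPS, cfg.Burst)
}
```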
I1212 00:37:19.492672 104530 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:19.688873 104530 request.go:629] Waited for 196.137127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
I1212 00:37:19.688950 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-859606
I1212 00:37:19.688955 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.688963 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.688969 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.691734 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:19.691755 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.691762 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.691767 104530 round_trippers.go:580] Audit-Id: f7675bf4-e31a-4738-b42f-be7859177fe3
I1212 00:37:19.691772 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.691777 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.691783 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.691788 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.692171 104530 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-859606","namespace":"kube-system","uid":"19a4264c-6ba5-44f4-8419-6f04d6224c92","resourceVersion":"1215","creationTimestamp":"2023-12-12T00:30:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.mirror":"bf1fb8b18f1a6f1d2413ac0c0fd0e39c","kubernetes.io/config.seen":"2023-12-12T00:29:55.207819594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
I1212 00:37:19.888908 104530 request.go:629] Waited for 196.296036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:19.888977 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes/multinode-859606
I1212 00:37:19.888982 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.888989 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.888996 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.891677 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:19.891697 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.891704 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.891710 104530 round_trippers.go:580] Audit-Id: 05fc06a3-8feb-45d4-9823-a6b2852345e9
I1212 00:37:19.891723 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.891735 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.891745 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.891754 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.892212 104530 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-12-12T00:29:59Z","fieldsType":"FieldsV1","f [truncated 5157 chars]
I1212 00:37:19.892531 104530 pod_ready.go:92] pod "kube-scheduler-multinode-859606" in "kube-system" namespace has status "Ready":"True"
I1212 00:37:19.892549 104530 pod_ready.go:81] duration metric: took 399.870057ms waiting for pod "kube-scheduler-multinode-859606" in "kube-system" namespace to be "Ready" ...
I1212 00:37:19.892566 104530 pod_ready.go:38] duration metric: took 10.095238343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1212 00:37:19.892585 104530 api_server.go:52] waiting for apiserver process to appear ...
I1212 00:37:19.892637 104530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:37:19.905440 104530 command_runner.go:130] > 1800
I1212 00:37:19.905932 104530 api_server.go:72] duration metric: took 11.991353984s to wait for apiserver process to appear ...
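[editor's note] The process probe above is a plain `pgrep`: `-x` requires an exact match, `-n` picks the newest matching process, `-f` matches against the full command line, so a zero exit plus a PID on stdout (the `1800` echoed by command_runner) means the apiserver process exists. minikube runs this over SSH inside the VM; the sketch below runs it locally for illustration.
```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the log shows: sudo pgrep -xnf kube-apiserver.*minikube.*
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
}
```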
I1212 00:37:19.905947 104530 api_server.go:88] waiting for apiserver healthz status ...
I1212 00:37:19.905967 104530 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
I1212 00:37:19.912545 104530 api_server.go:279] https://192.168.39.40:8443/healthz returned 200:
ok
I1212 00:37:19.912608 104530 round_trippers.go:463] GET https://192.168.39.40:8443/version
I1212 00:37:19.912620 104530 round_trippers.go:469] Request Headers:
I1212 00:37:19.912630 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:19.912637 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:19.913604 104530 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I1212 00:37:19.913622 104530 round_trippers.go:577] Response Headers:
I1212 00:37:19.913631 104530 round_trippers.go:580] Audit-Id: a90e5deb-2922-43fe-bcfb-bbd1e68986eb
I1212 00:37:19.913640 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:19.913655 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:19.913663 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:19.913674 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:19.913683 104530 round_trippers.go:580] Content-Length: 264
I1212 00:37:19.913691 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:19 GMT
I1212 00:37:19.913714 104530 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.4",
"gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
"gitTreeState": "clean",
"buildDate": "2023-11-15T16:48:54Z",
"goVersion": "go1.20.11",
"compiler": "gc",
"platform": "linux/amd64"
}
I1212 00:37:19.913766 104530 api_server.go:141] control plane version: v1.28.4
I1212 00:37:19.913784 104530 api_server.go:131] duration metric: took 7.830198ms to wait for apiserver health ...
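[editor's note] Two endpoints back the health step above: `/healthz` returns the literal body `ok` when the apiserver is healthy, and `/version` serves the JSON blob shown (gitVersion v1.28.4). A minimal sketch of both checks through client-go's discovery client, assuming the same kubeconfig setup as the earlier sketches:
```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Healthy apiservers answer /healthz with the body "ok".
	body, err := cs.Discovery().RESTClient().Get().
		AbsPath("/healthz").Do(context.Background()).Raw()
	fmt.Printf("healthz: %s err=%v\n", body, err)
	// ServerVersion decodes the /version JSON shown in the log.
	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", info.GitVersion)
}
```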
I1212 00:37:19.913794 104530 system_pods.go:43] waiting for kube-system pods to appear ...
I1212 00:37:20.089251 104530 request.go:629] Waited for 175.374729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:20.089344 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:20.089351 104530 round_trippers.go:469] Request Headers:
I1212 00:37:20.089363 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:20.089370 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:20.093974 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:20.094001 104530 round_trippers.go:577] Response Headers:
I1212 00:37:20.094009 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:20.094016 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:20 GMT
I1212 00:37:20.094024 104530 round_trippers.go:580] Audit-Id: a00499e6-5aa6-4108-b030-bb102abafbdd
I1212 00:37:20.094032 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:20.094055 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:20.094065 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:20.095252 104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1231","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83341 chars]
I1212 00:37:20.098784 104530 system_pods.go:59] 12 kube-system pods found
I1212 00:37:20.098809 104530 system_pods.go:61] "coredns-5dd5756b68-t9jz8" [3605a003-e8d6-46b2-8fe7-f45647656622] Running
I1212 00:37:20.098814 104530 system_pods.go:61] "etcd-multinode-859606" [7d6ae370-b910-4aef-8729-e141b307ae17] Running
I1212 00:37:20.098820 104530 system_pods.go:61] "kindnet-9slwc" [6b37daf7-e9d5-47c5-ae94-01150282b6cf] Running
I1212 00:37:20.098826 104530 system_pods.go:61] "kindnet-d4q52" [35ed1c56-7487-4b6d-ab1f-b5cfe6502739] Running
I1212 00:37:20.098832 104530 system_pods.go:61] "kindnet-x2g5d" [c1dab004-2557-4b4f-975b-bd0b5a8f4d90] Running
I1212 00:37:20.098839 104530 system_pods.go:61] "kube-apiserver-multinode-859606" [0060efa7-dc06-439e-878f-b93b0e016326] Running
I1212 00:37:20.098853 104530 system_pods.go:61] "kube-controller-manager-multinode-859606" [901bf3ab-f34d-42c8-b1da-d5431ae0219f] Running
I1212 00:37:20.098864 104530 system_pods.go:61] "kube-proxy-6f6zz" [d5931621-47fd-4f1a-bf46-813dd8352f00] Running
I1212 00:37:20.098870 104530 system_pods.go:61] "kube-proxy-prf7f" [8238226c-3d01-4b91-963b-7360206b8615] Running
I1212 00:37:20.098877 104530 system_pods.go:61] "kube-proxy-q9h26" [7dd12033-bf81-4cd3-a412-3fe3211dc87b] Running
I1212 00:37:20.098887 104530 system_pods.go:61] "kube-scheduler-multinode-859606" [19a4264c-6ba5-44f4-8419-6f04d6224c92] Running
I1212 00:37:20.098896 104530 system_pods.go:61] "storage-provisioner" [a021db21-b335-4c05-8e32-808642dbb72e] Running
I1212 00:37:20.098906 104530 system_pods.go:74] duration metric: took 185.102197ms to wait for pod list to return data ...
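[editor's note] The "12 kube-system pods found" summary above is a single PodList GET rendered as one line per pod with name, UID, and phase. A sketch of the equivalent listing (illustrative, not minikube's system_pods.go):
```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Matches the log's `"name" [uid] Phase` rendering.
		fmt.Printf("%q [%s] %s (running=%t)\n",
			p.Name, p.UID, p.Status.Phase, p.Status.Phase == corev1.PodRunning)
	}
}
```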
I1212 00:37:20.098917 104530 default_sa.go:34] waiting for default service account to be created ...
I1212 00:37:20.289369 104530 request.go:629] Waited for 190.371344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/default/serviceaccounts
I1212 00:37:20.289426 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/default/serviceaccounts
I1212 00:37:20.289431 104530 round_trippers.go:469] Request Headers:
I1212 00:37:20.289439 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:20.289445 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:20.292334 104530 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I1212 00:37:20.292356 104530 round_trippers.go:577] Response Headers:
I1212 00:37:20.292380 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:20.292392 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:20.292406 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:20.292429 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:20.292440 104530 round_trippers.go:580] Content-Length: 262
I1212 00:37:20.292445 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:20 GMT
I1212 00:37:20.292452 104530 round_trippers.go:580] Audit-Id: fcc27580-a669-4f4d-a44c-e2fc099e94e8
I1212 00:37:20.292478 104530 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b7226be9-2d9e-41aa-a29f-25b2631acf72","resourceVersion":"337","creationTimestamp":"2023-12-12T00:30:16Z"}}]}
I1212 00:37:20.292693 104530 default_sa.go:45] found service account: "default"
I1212 00:37:20.292714 104530 default_sa.go:55] duration metric: took 193.787623ms for default service account to be created ...
I1212 00:37:20.292723 104530 system_pods.go:116] waiting for k8s-apps to be running ...
I1212 00:37:20.489190 104530 request.go:629] Waited for 196.390334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:20.489259 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
I1212 00:37:20.489264 104530 round_trippers.go:469] Request Headers:
I1212 00:37:20.489281 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:20.489299 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:20.493457 104530 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1212 00:37:20.493482 104530 round_trippers.go:577] Response Headers:
I1212 00:37:20.493501 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:20.493511 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:20.493519 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:20.493534 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:20.493541 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:20 GMT
I1212 00:37:20.493545 104530 round_trippers.go:580] Audit-Id: b5e27102-8247-4af2-81d0-d5c782e978b9
I1212 00:37:20.495018 104530 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"coredns-5dd5756b68-t9jz8","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3605a003-e8d6-46b2-8fe7-f45647656622","resourceVersion":"1231","creationTimestamp":"2023-12-12T00:30:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"67f20424-2902-4225-b58a-da1b126c1b61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T00:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67f20424-2902-4225-b58a-da1b126c1b61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83341 chars]
I1212 00:37:20.497464 104530 system_pods.go:86] 12 kube-system pods found
I1212 00:37:20.497487 104530 system_pods.go:89] "coredns-5dd5756b68-t9jz8" [3605a003-e8d6-46b2-8fe7-f45647656622] Running
I1212 00:37:20.497492 104530 system_pods.go:89] "etcd-multinode-859606" [7d6ae370-b910-4aef-8729-e141b307ae17] Running
I1212 00:37:20.497498 104530 system_pods.go:89] "kindnet-9slwc" [6b37daf7-e9d5-47c5-ae94-01150282b6cf] Running
I1212 00:37:20.497505 104530 system_pods.go:89] "kindnet-d4q52" [35ed1c56-7487-4b6d-ab1f-b5cfe6502739] Running
I1212 00:37:20.497520 104530 system_pods.go:89] "kindnet-x2g5d" [c1dab004-2557-4b4f-975b-bd0b5a8f4d90] Running
I1212 00:37:20.497528 104530 system_pods.go:89] "kube-apiserver-multinode-859606" [0060efa7-dc06-439e-878f-b93b0e016326] Running
I1212 00:37:20.497543 104530 system_pods.go:89] "kube-controller-manager-multinode-859606" [901bf3ab-f34d-42c8-b1da-d5431ae0219f] Running
I1212 00:37:20.497550 104530 system_pods.go:89] "kube-proxy-6f6zz" [d5931621-47fd-4f1a-bf46-813dd8352f00] Running
I1212 00:37:20.497554 104530 system_pods.go:89] "kube-proxy-prf7f" [8238226c-3d01-4b91-963b-7360206b8615] Running
I1212 00:37:20.497560 104530 system_pods.go:89] "kube-proxy-q9h26" [7dd12033-bf81-4cd3-a412-3fe3211dc87b] Running
I1212 00:37:20.497565 104530 system_pods.go:89] "kube-scheduler-multinode-859606" [19a4264c-6ba5-44f4-8419-6f04d6224c92] Running
I1212 00:37:20.497571 104530 system_pods.go:89] "storage-provisioner" [a021db21-b335-4c05-8e32-808642dbb72e] Running
I1212 00:37:20.497579 104530 system_pods.go:126] duration metric: took 204.845476ms to wait for k8s-apps to be running ...
I1212 00:37:20.497589 104530 system_svc.go:44] waiting for kubelet service to be running ....
I1212 00:37:20.497645 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1212 00:37:20.514001 104530 system_svc.go:56] duration metric: took 16.405003ms WaitForService to wait for kubelet.
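[editor's note] The kubelet service check above relies on `systemctl is-active --quiet` exiting 0 when the unit is active and non-zero otherwise, so the command's error result alone answers the question. minikube runs it over SSH inside the VM; this sketch mirrors the exact argument list from the log, run locally:
```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Verbatim from the log: sudo systemctl is-active --quiet service kubelet
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
```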
I1212 00:37:20.514018 104530 kubeadm.go:581] duration metric: took 12.599444535s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I1212 00:37:20.514036 104530 node_conditions.go:102] verifying NodePressure condition ...
I1212 00:37:20.689493 104530 request.go:629] Waited for 175.357994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes
I1212 00:37:20.689560 104530 round_trippers.go:463] GET https://192.168.39.40:8443/api/v1/nodes
I1212 00:37:20.689567 104530 round_trippers.go:469] Request Headers:
I1212 00:37:20.689580 104530 round_trippers.go:473] Accept: application/json, */*
I1212 00:37:20.689590 104530 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I1212 00:37:20.692705 104530 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I1212 00:37:20.692723 104530 round_trippers.go:577] Response Headers:
I1212 00:37:20.692730 104530 round_trippers.go:580] Audit-Id: 1464068b-baf2-48bc-ba66-087651c82097
I1212 00:37:20.692735 104530 round_trippers.go:580] Cache-Control: no-cache, private
I1212 00:37:20.692740 104530 round_trippers.go:580] Content-Type: application/json
I1212 00:37:20.692752 104530 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 903d8542-1472-4deb-b930-12ee3151fe79
I1212 00:37:20.692766 104530 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 044af0ed-7556-452e-b14b-497d7b22ca6a
I1212 00:37:20.692774 104530 round_trippers.go:580] Date: Tue, 12 Dec 2023 00:37:20 GMT
I1212 00:37:20.693088 104530 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1240"},"items":[{"metadata":{"name":"multinode-859606","uid":"5647f505-fa86-4c4a-a2de-cbfaa4ac7b2b","resourceVersion":"1213","creationTimestamp":"2023-12-12T00:29:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-859606","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f155626207ae1ae93e2fd3ceb81b1e734028b5f4","minikube.k8s.io/name":"multinode-859606","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T00_30_04_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10008 chars]
I1212 00:37:20.693685 104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1212 00:37:20.693709 104530 node_conditions.go:123] node cpu capacity is 2
I1212 00:37:20.693723 104530 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I1212 00:37:20.693735 104530 node_conditions.go:123] node cpu capacity is 2
I1212 00:37:20.693741 104530 node_conditions.go:105] duration metric: took 179.70085ms to run NodePressure ...
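[editor's note] The NodePressure step above lists all nodes once and prints each node's ephemeral-storage and CPU capacity (17784752Ki and 2 CPUs for both nodes here). A sketch of reading those capacities from the NodeList, using the same client setup as before:
```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The two figures the log reports per node.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}
```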
I1212 00:37:20.693757 104530 start.go:228] waiting for startup goroutines ...
I1212 00:37:20.693768 104530 start.go:233] waiting for cluster config update ...
I1212 00:37:20.693780 104530 start.go:242] writing updated cluster config ...
I1212 00:37:20.694346 104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:37:20.694464 104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
I1212 00:37:20.697216 104530 out.go:177] * Starting worker node multinode-859606-m02 in cluster multinode-859606
I1212 00:37:20.698351 104530 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I1212 00:37:20.698370 104530 cache.go:56] Caching tarball of preloaded images
I1212 00:37:20.698473 104530 preload.go:174] Found /home/jenkins/minikube-integration/17764-80294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1212 00:37:20.698483 104530 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I1212 00:37:20.698567 104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
I1212 00:37:20.698742 104530 start.go:365] acquiring machines lock for multinode-859606-m02: {Name:mk381e91746c2e5b8a4620fe3fd447d80375e413 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1212 00:37:20.698785 104530 start.go:369] acquired machines lock for "multinode-859606-m02" in 25.605µs
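[editor's note] The machines lock above is a named cross-process mutex acquired with a retry delay and timeout (the log shows Delay:500ms, Timeout:13m0s); here it is uncontended, so acquisition takes microseconds. A minimal in-process stand-in for acquire-with-timeout, purely illustrative and much simpler than the real lock:
```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// tryAcquire blocks until the single-slot semaphore is free or the timeout
// elapses, an in-process analogue of the machines lock in the log.
func tryAcquire(sem chan struct{}, timeout time.Duration) error {
	select {
	case sem <- struct{}{}:
		return nil
	case <-time.After(timeout):
		return errors.New("timed out acquiring machines lock")
	}
}

func main() {
	sem := make(chan struct{}, 1)
	if err := tryAcquire(sem, 13*time.Minute); err != nil {
		panic(err)
	}
	defer func() { <-sem }() // release
	fmt.Println("acquired machines lock")
}
```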
I1212 00:37:20.698798 104530 start.go:96] Skipping create...Using existing machine configuration
I1212 00:37:20.698805 104530 fix.go:54] fixHost starting: m02
I1212 00:37:20.699049 104530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1212 00:37:20.699070 104530 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 00:37:20.713769 104530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39383
I1212 00:37:20.714173 104530 main.go:141] libmachine: () Calling .GetVersion
I1212 00:37:20.714616 104530 main.go:141] libmachine: Using API Version 1
I1212 00:37:20.714644 104530 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 00:37:20.714957 104530 main.go:141] libmachine: () Calling .GetMachineName
I1212 00:37:20.715148 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:20.715321 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetState
I1212 00:37:20.716762 104530 fix.go:102] recreateIfNeeded on multinode-859606-m02: state=Stopped err=<nil>
I1212 00:37:20.716788 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
W1212 00:37:20.716969 104530 fix.go:128] unexpected machine state, will restart: <nil>
I1212 00:37:20.718972 104530 out.go:177] * Restarting existing kvm2 VM for "multinode-859606-m02" ...
I1212 00:37:20.720351 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .Start
I1212 00:37:20.720531 104530 main.go:141] libmachine: (multinode-859606-m02) Ensuring networks are active...
I1212 00:37:20.721224 104530 main.go:141] libmachine: (multinode-859606-m02) Ensuring network default is active
I1212 00:37:20.721668 104530 main.go:141] libmachine: (multinode-859606-m02) Ensuring network mk-multinode-859606 is active
I1212 00:37:20.722168 104530 main.go:141] libmachine: (multinode-859606-m02) Getting domain xml...
I1212 00:37:20.722963 104530 main.go:141] libmachine: (multinode-859606-m02) Creating domain...
I1212 00:37:21.957474 104530 main.go:141] libmachine: (multinode-859606-m02) Waiting to get IP...
I1212 00:37:21.958335 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:21.958740 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:21.958796 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:21.958699 104802 retry.go:31] will retry after 282.895442ms: waiting for machine to come up
I1212 00:37:22.243280 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:22.243745 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:22.243773 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:22.243699 104802 retry.go:31] will retry after 387.587998ms: waiting for machine to come up
I1212 00:37:22.633350 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:22.633841 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:22.633875 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:22.633770 104802 retry.go:31] will retry after 299.810803ms: waiting for machine to come up
I1212 00:37:22.935179 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:22.935627 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:22.935662 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:22.935567 104802 retry.go:31] will retry after 368.460834ms: waiting for machine to come up
I1212 00:37:23.306050 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:23.306531 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:23.306554 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:23.306486 104802 retry.go:31] will retry after 567.761569ms: waiting for machine to come up
I1212 00:37:23.876187 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:23.876658 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:23.876692 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:23.876603 104802 retry.go:31] will retry after 673.685642ms: waiting for machine to come up
I1212 00:37:24.551471 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:24.551879 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:24.551932 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:24.551825 104802 retry.go:31] will retry after 837.913991ms: waiting for machine to come up
I1212 00:37:25.391781 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:25.392075 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:25.392106 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:25.392038 104802 retry.go:31] will retry after 1.006695939s: waiting for machine to come up
I1212 00:37:26.400658 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:26.401136 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:26.401168 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:26.401063 104802 retry.go:31] will retry after 1.662996951s: waiting for machine to come up
I1212 00:37:28.065937 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:28.066407 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:28.066429 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:28.066363 104802 retry.go:31] will retry after 2.272536479s: waiting for machine to come up
I1212 00:37:30.341875 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:30.342336 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:30.342380 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:30.342274 104802 retry.go:31] will retry after 1.895134507s: waiting for machine to come up
I1212 00:37:32.239315 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:32.239701 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:32.239736 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:32.239637 104802 retry.go:31] will retry after 2.566822425s: waiting for machine to come up
I1212 00:37:34.808939 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:34.809382 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | unable to find current IP address of domain multinode-859606-m02 in network mk-multinode-859606
I1212 00:37:34.809406 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | I1212 00:37:34.809339 104802 retry.go:31] will retry after 4.439419543s: waiting for machine to come up
I1212 00:37:39.249907 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.250290 104530 main.go:141] libmachine: (multinode-859606-m02) Found IP for machine: 192.168.39.65
I1212 00:37:39.250320 104530 main.go:141] libmachine: (multinode-859606-m02) Reserving static IP address...
I1212 00:37:39.250342 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has current primary IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.250818 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "multinode-859606-m02", mac: "52:54:00:ea:e9:13", ip: "192.168.39.65"} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.250858 104530 main.go:141] libmachine: (multinode-859606-m02) Reserved static IP address: 192.168.39.65
I1212 00:37:39.250878 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | skip adding static IP to network mk-multinode-859606 - found existing host DHCP lease matching {name: "multinode-859606-m02", mac: "52:54:00:ea:e9:13", ip: "192.168.39.65"}
I1212 00:37:39.250889 104530 main.go:141] libmachine: (multinode-859606-m02) Waiting for SSH to be available...
I1212 00:37:39.250909 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | Getting to WaitForSSH function...
I1212 00:37:39.253228 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.253705 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.253733 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.253879 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | Using SSH client type: external
I1212 00:37:39.253906 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa (-rw-------)
I1212 00:37:39.253933 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I1212 00:37:39.253947 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | About to run SSH command:
I1212 00:37:39.253968 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | exit 0
I1212 00:37:39.347723 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | SSH cmd err, output: <nil>:
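That empty "exit 0" run is libmachine's SSH liveness probe: provisioning only continues once a no-op command succeeds over the external ssh client configured above. Reconstructed as a standalone command from the options printed in the log (same key and options; only the trailing echo is added):

    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o PasswordAuthentication=no -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa \
        -p 22 docker@192.168.39.65 'exit 0' && echo "SSH is up"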
I1212 00:37:39.348137 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetConfigRaw
I1212 00:37:39.348792 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetIP
I1212 00:37:39.351240 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.351592 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.351628 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.351860 104530 profile.go:148] Saving config to /home/jenkins/minikube-integration/17764-80294/.minikube/profiles/multinode-859606/config.json ...
I1212 00:37:39.352092 104530 machine.go:88] provisioning docker machine ...
I1212 00:37:39.352113 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:39.352303 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetMachineName
I1212 00:37:39.352445 104530 buildroot.go:166] provisioning hostname "multinode-859606-m02"
I1212 00:37:39.352470 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetMachineName
I1212 00:37:39.352609 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:39.354957 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.355309 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.355339 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.355537 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:39.355716 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.355867 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.355992 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:39.356149 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:37:39.356637 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.65 22 <nil> <nil>}
I1212 00:37:39.356656 104530 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-859606-m02 && echo "multinode-859606-m02" | sudo tee /etc/hostname
I1212 00:37:39.502532 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-859606-m02
I1212 00:37:39.502568 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:39.505328 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.505789 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.505823 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.505999 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:39.506231 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.506373 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.506531 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:39.506708 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:37:39.507067 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.65 22 <nil> <nil>}
I1212 00:37:39.507085 104530 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-859606-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-859606-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-859606-m02' | sudo tee -a /etc/hosts;
fi
fi
I1212 00:37:39.645009 104530 main.go:141] libmachine: SSH cmd err, output: <nil>:
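The script above keeps the Debian/Buildroot convention of aliasing the machine's own hostname to 127.0.1.1: it rewrites an existing 127.0.1.1 entry in place and only appends when none exists, so repeated provisioning stays idempotent. A quick check of the expected result (illustrative output):

    grep '^127.0.1.1' /etc/hosts
    # expected after the script above:
    # 127.0.1.1 multinode-859606-m02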
I1212 00:37:39.645036 104530 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17764-80294/.minikube CaCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17764-80294/.minikube}
I1212 00:37:39.645051 104530 buildroot.go:174] setting up certificates
I1212 00:37:39.645059 104530 provision.go:83] configureAuth start
I1212 00:37:39.645068 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetMachineName
I1212 00:37:39.645319 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetIP
I1212 00:37:39.648244 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.648695 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.648726 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.648891 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:39.651280 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.651603 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.651634 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.651775 104530 provision.go:138] copyHostCerts
I1212 00:37:39.651810 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
I1212 00:37:39.651849 104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem, removing ...
I1212 00:37:39.651862 104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem
I1212 00:37:39.651958 104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/ca.pem (1078 bytes)
I1212 00:37:39.652055 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
I1212 00:37:39.652080 104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem, removing ...
I1212 00:37:39.652087 104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem
I1212 00:37:39.652126 104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/cert.pem (1123 bytes)
I1212 00:37:39.652240 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
I1212 00:37:39.652270 104530 exec_runner.go:144] found /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem, removing ...
I1212 00:37:39.652278 104530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem
I1212 00:37:39.652320 104530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17764-80294/.minikube/key.pem (1679 bytes)
I1212 00:37:39.652413 104530 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca-key.pem org=jenkins.multinode-859606-m02 san=[192.168.39.65 192.168.39.65 localhost 127.0.0.1 minikube multinode-859606-m02]
I1212 00:37:39.786080 104530 provision.go:172] copyRemoteCerts
I1212 00:37:39.786162 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1212 00:37:39.786193 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:39.788840 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.789107 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.789147 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.789364 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:39.789559 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.789730 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:39.789868 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
I1212 00:37:39.884832 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1212 00:37:39.884920 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1212 00:37:39.908744 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem -> /etc/docker/server.pem
I1212 00:37:39.908817 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I1212 00:37:39.932380 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1212 00:37:39.932446 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1212 00:37:39.956816 104530 provision.go:86] duration metric: configureAuth took 311.743914ms
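configureAuth just copied a CA plus a freshly signed server keypair to /etc/docker on the guest; those are the same paths the dockerd --tlsverify flags reference later in this provisioning step. With the matching client material that stayed under .minikube/certs, the daemon on port 2376 can be reached directly (a sketch, assuming dockerd is already up with the flags written below):

    docker --tlsverify \
        --tlscacert /home/jenkins/minikube-integration/17764-80294/.minikube/certs/ca.pem \
        --tlscert   /home/jenkins/minikube-integration/17764-80294/.minikube/certs/cert.pem \
        --tlskey    /home/jenkins/minikube-integration/17764-80294/.minikube/certs/key.pem \
        -H tcp://192.168.39.65:2376 version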
I1212 00:37:39.956853 104530 buildroot.go:189] setting minikube options for container-runtime
I1212 00:37:39.957091 104530 config.go:182] Loaded profile config "multinode-859606": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 00:37:39.957118 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:39.957389 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:39.960094 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.960494 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:39.960529 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:39.960669 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:39.960847 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.961048 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:39.961181 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:39.961346 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:37:39.961722 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.65 22 <nil> <nil>}
I1212 00:37:39.961740 104530 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1212 00:37:40.093977 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I1212 00:37:40.094012 104530 buildroot.go:70] root file system type: tmpfs
I1212 00:37:40.094174 104530 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1212 00:37:40.094208 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:40.097149 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:40.097507 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:40.097534 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:40.097760 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:40.098013 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:40.098210 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:40.098318 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:40.098507 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:37:40.098848 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.65 22 <nil> <nil>}
I1212 00:37:40.098916 104530 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.40"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1212 00:37:40.241326 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.40
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1212 00:37:40.241355 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:40.243925 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:40.244271 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:40.244296 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:40.244504 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:40.244693 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:40.244875 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:40.245023 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:40.245173 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:37:40.245547 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.65 22 <nil> <nil>}
I1212 00:37:40.245565 104530 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1212 00:37:41.126250 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
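The diff-or-replace one-liner above is an idempotent unit update: the freshly rendered docker.service.new only replaces the installed unit (and triggers daemon-reload, enable, and restart) when the two differ. Here diff failed simply because no unit existed at that path yet, so the new file was installed and the symlink created. The same pattern, generalized (render_unit is a hypothetical stand-in for the printf above):

    render_unit > /tmp/docker.service.new
    if ! sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new; then
      sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi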
I1212 00:37:41.126280 104530 machine.go:91] provisioned docker machine in 1.774172725s
I1212 00:37:41.126296 104530 start.go:300] post-start starting for "multinode-859606-m02" (driver="kvm2")
I1212 00:37:41.126310 104530 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1212 00:37:41.126329 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:41.126679 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1212 00:37:41.126707 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:41.129504 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.129833 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:41.129866 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.130073 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:41.130301 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:41.130478 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:41.130687 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
I1212 00:37:41.225898 104530 ssh_runner.go:195] Run: cat /etc/os-release
I1212 00:37:41.230065 104530 command_runner.go:130] > NAME=Buildroot
I1212 00:37:41.230089 104530 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
I1212 00:37:41.230096 104530 command_runner.go:130] > ID=buildroot
I1212 00:37:41.230109 104530 command_runner.go:130] > VERSION_ID=2021.02.12
I1212 00:37:41.230117 104530 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I1212 00:37:41.230251 104530 info.go:137] Remote host: Buildroot 2021.02.12
I1212 00:37:41.230275 104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/addons for local assets ...
I1212 00:37:41.230351 104530 filesync.go:126] Scanning /home/jenkins/minikube-integration/17764-80294/.minikube/files for local assets ...
I1212 00:37:41.230452 104530 filesync.go:149] local asset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> 876092.pem in /etc/ssl/certs
I1212 00:37:41.230466 104530 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem -> /etc/ssl/certs/876092.pem
I1212 00:37:41.230586 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1212 00:37:41.239133 104530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17764-80294/.minikube/files/etc/ssl/certs/876092.pem --> /etc/ssl/certs/876092.pem (1708 bytes)
I1212 00:37:41.262487 104530 start.go:303] post-start completed in 136.174154ms
I1212 00:37:41.262513 104530 fix.go:56] fixHost completed within 20.563707335s
I1212 00:37:41.262539 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:41.265240 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.265538 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:41.265572 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.265778 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:41.265950 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:41.266126 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:41.266310 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:41.266489 104530 main.go:141] libmachine: Using SSH client type: native
I1212 00:37:41.266856 104530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil> [] 0s} 192.168.39.65 22 <nil> <nil>}
I1212 00:37:41.266871 104530 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1212 00:37:41.396610 104530 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702341461.344204788
I1212 00:37:41.396638 104530 fix.go:206] guest clock: 1702341461.344204788
I1212 00:37:41.396649 104530 fix.go:219] Guest: 2023-12-12 00:37:41.344204788 +0000 UTC Remote: 2023-12-12 00:37:41.262521516 +0000 UTC m=+81.745766897 (delta=81.683272ms)
I1212 00:37:41.396669 104530 fix.go:190] guest clock delta is within tolerance: 81.683272ms
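The skew check runs date +%s.%N in the guest and subtracts it from the host clock at the moment the SSH call returns; only a delta beyond minikube's tolerance would force a resync, and the 81ms measured here passes. The same measurement by hand (threshold handling omitted; bc does the decimal math):

    guest=$(ssh docker@192.168.39.65 date +%s.%N)   # guest clock
    host=$(date +%s.%N)                             # host clock
    echo "skew: $(echo "$host - $guest" | bc)s"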
I1212 00:37:41.396676 104530 start.go:83] releasing machines lock for "multinode-859606-m02", held for 20.697881438s
I1212 00:37:41.396707 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:41.396998 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetIP
I1212 00:37:41.399794 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.400251 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:41.400284 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.402301 104530 out.go:177] * Found network options:
I1212 00:37:41.403745 104530 out.go:177] - NO_PROXY=192.168.39.40
W1212 00:37:41.404991 104530 proxy.go:119] fail to check proxy env: Error ip not in block
I1212 00:37:41.405014 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:41.405584 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:41.405757 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .DriverName
I1212 00:37:41.405832 104530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1212 00:37:41.405875 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
W1212 00:37:41.405953 104530 proxy.go:119] fail to check proxy env: Error ip not in block
I1212 00:37:41.406034 104530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1212 00:37:41.406061 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHHostname
I1212 00:37:41.408298 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.408470 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.408704 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:41.408734 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.408860 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:41.408890 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:13", ip: ""} in network mk-multinode-859606: {Iface:virbr1 ExpiryTime:2023-12-12 01:37:33 +0000 UTC Type:0 Mac:52:54:00:ea:e9:13 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-859606-m02 Clientid:01:52:54:00:ea:e9:13}
I1212 00:37:41.408931 104530 main.go:141] libmachine: (multinode-859606-m02) DBG | domain multinode-859606-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:ea:e9:13 in network mk-multinode-859606
I1212 00:37:41.409042 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHPort
I1212 00:37:41.409170 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:41.409276 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHKeyPath
I1212 00:37:41.409448 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:41.409487 104530 main.go:141] libmachine: (multinode-859606-m02) Calling .GetSSHUsername
I1212 00:37:41.409614 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
I1212 00:37:41.409611 104530 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17764-80294/.minikube/machines/multinode-859606-m02/id_rsa Username:docker}
I1212 00:37:41.504163 104530 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W1212 00:37:41.504453 104530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1212 00:37:41.504528 104530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1212 00:37:41.528894 104530 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I1212 00:37:41.528955 104530 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I1212 00:37:41.529013 104530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1212 00:37:41.529030 104530 start.go:475] detecting cgroup driver to use...
I1212 00:37:41.529132 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1212 00:37:41.549871 104530 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I1212 00:37:41.549952 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I1212 00:37:41.559926 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1212 00:37:41.569604 104530 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1212 00:37:41.569669 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1212 00:37:41.578872 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:37:41.588052 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1212 00:37:41.597753 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:37:41.607940 104530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1212 00:37:41.618063 104530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1212 00:37:41.628111 104530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1212 00:37:41.637202 104530 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I1212 00:37:41.637321 104530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1212 00:37:41.645675 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:37:41.756330 104530 ssh_runner.go:195] Run: sudo systemctl restart containerd
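The run of sed edits above pins containerd to the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, pause:3.9 as the sandbox image, and /etc/cni/net.d as the CNI conf dir, then reloads systemd and restarts containerd. A quick way to confirm the rewritten keys (expected matches shown as comments, reconstructed from the sed expressions rather than dumped from this run):

    sudo grep -nE 'SystemdCgroup|sandbox_image|runc\.v2|conf_dir' /etc/containerd/config.toml
    # SystemdCgroup = false
    # sandbox_image = "registry.k8s.io/pause:3.9"
    # runtime_type = "io.containerd.runc.v2"
    # conf_dir = "/etc/cni/net.d"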
I1212 00:37:41.774116 104530 start.go:475] detecting cgroup driver to use...
I1212 00:37:41.774203 104530 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1212 00:37:41.790254 104530 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I1212 00:37:41.790292 104530 command_runner.go:130] > [Unit]
I1212 00:37:41.790304 104530 command_runner.go:130] > Description=Docker Application Container Engine
I1212 00:37:41.790313 104530 command_runner.go:130] > Documentation=https://docs.docker.com
I1212 00:37:41.790321 104530 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I1212 00:37:41.790329 104530 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I1212 00:37:41.790357 104530 command_runner.go:130] > StartLimitBurst=3
I1212 00:37:41.790372 104530 command_runner.go:130] > StartLimitIntervalSec=60
I1212 00:37:41.790377 104530 command_runner.go:130] > [Service]
I1212 00:37:41.790387 104530 command_runner.go:130] > Type=notify
I1212 00:37:41.790391 104530 command_runner.go:130] > Restart=on-failure
I1212 00:37:41.790396 104530 command_runner.go:130] > Environment=NO_PROXY=192.168.39.40
I1212 00:37:41.790406 104530 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I1212 00:37:41.790421 104530 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I1212 00:37:41.790437 104530 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I1212 00:37:41.790453 104530 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I1212 00:37:41.790463 104530 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I1212 00:37:41.790474 104530 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I1212 00:37:41.790485 104530 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I1212 00:37:41.790548 104530 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I1212 00:37:41.790571 104530 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I1212 00:37:41.790578 104530 command_runner.go:130] > ExecStart=
I1212 00:37:41.790612 104530 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I1212 00:37:41.790624 104530 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I1212 00:37:41.790640 104530 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I1212 00:37:41.790650 104530 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I1212 00:37:41.790654 104530 command_runner.go:130] > LimitNOFILE=infinity
I1212 00:37:41.790662 104530 command_runner.go:130] > LimitNPROC=infinity
I1212 00:37:41.790671 104530 command_runner.go:130] > LimitCORE=infinity
I1212 00:37:41.790681 104530 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I1212 00:37:41.790693 104530 command_runner.go:130] > # Only systemd 226 and above support this version.
I1212 00:37:41.790703 104530 command_runner.go:130] > TasksMax=infinity
I1212 00:37:41.790718 104530 command_runner.go:130] > TimeoutStartSec=0
I1212 00:37:41.790729 104530 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I1212 00:37:41.790740 104530 command_runner.go:130] > Delegate=yes
I1212 00:37:41.790749 104530 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I1212 00:37:41.790764 104530 command_runner.go:130] > KillMode=process
I1212 00:37:41.790774 104530 command_runner.go:130] > [Install]
I1212 00:37:41.790781 104530 command_runner.go:130] > WantedBy=multi-user.target
I1212 00:37:41.790852 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1212 00:37:41.807010 104530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1212 00:37:41.831315 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1212 00:37:41.843702 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1212 00:37:41.855452 104530 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1212 00:37:41.887392 104530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1212 00:37:41.900115 104530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1212 00:37:41.917122 104530 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
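With /etc/crictl.yaml now pointing at cri-dockerd instead of containerd, any crictl invocation goes to Docker through the CRI shim. For example, the equivalent of relying on the file just written:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a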
I1212 00:37:41.917212 104530 ssh_runner.go:195] Run: which cri-dockerd
I1212 00:37:41.920948 104530 command_runner.go:130] > /usr/bin/cri-dockerd
I1212 00:37:41.921049 104530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1212 00:37:41.929638 104530 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1212 00:37:41.945850 104530 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1212 00:37:42.053680 104530 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1212 00:37:42.164852 104530 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
I1212 00:37:42.164906 104530 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1212 00:37:42.181956 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:37:42.292269 104530 ssh_runner.go:195] Run: sudo systemctl restart docker
I1212 00:37:43.762922 104530 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.47061306s)
I1212 00:37:43.762999 104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1212 00:37:43.866143 104530 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1212 00:37:43.974469 104530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1212 00:37:44.089805 104530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:37:44.189760 104530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1212 00:37:44.203372 104530 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
I1212 00:37:44.203469 104530 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
I1212 00:37:44.213697 104530 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 00:37:32 UTC, ends at Tue 2023-12-12 00:37:44 UTC. --
I1212 00:37:44.213720 104530 command_runner.go:130] > Dec 12 00:37:33 minikube systemd[1]: Starting CRI Docker Socket for the API.
I1212 00:37:44.213727 104530 command_runner.go:130] > Dec 12 00:37:33 minikube systemd[1]: Listening on CRI Docker Socket for the API.
I1212 00:37:44.213734 104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: cri-docker.socket: Succeeded.
I1212 00:37:44.213740 104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Closed CRI Docker Socket for the API.
I1212 00:37:44.213747 104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Stopping CRI Docker Socket for the API.
I1212 00:37:44.213755 104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Starting CRI Docker Socket for the API.
I1212 00:37:44.213761 104530 command_runner.go:130] > Dec 12 00:37:36 minikube systemd[1]: Listening on CRI Docker Socket for the API.
I1212 00:37:44.213770 104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: cri-docker.socket: Succeeded.
I1212 00:37:44.213778 104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Closed CRI Docker Socket for the API.
I1212 00:37:44.213786 104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Stopping CRI Docker Socket for the API.
I1212 00:37:44.213794 104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Starting CRI Docker Socket for the API.
I1212 00:37:44.213801 104530 command_runner.go:130] > Dec 12 00:37:38 minikube systemd[1]: Listening on CRI Docker Socket for the API.
I1212 00:37:44.213814 104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
I1212 00:37:44.213828 104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
I1212 00:37:44.213842 104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
I1212 00:37:44.213860 104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Starting CRI Docker Socket for the API.
I1212 00:37:44.213874 104530 command_runner.go:130] > Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Listening on CRI Docker Socket for the API.
I1212 00:37:44.213887 104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
I1212 00:37:44.213899 104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
I1212 00:37:44.213913 104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
I1212 00:37:44.213929 104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
I1212 00:37:44.213946 104530 command_runner.go:130] > Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
I1212 00:37:44.216418 104530 out.go:177]
W1212 00:37:44.218157 104530 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
sudo journalctl --no-pager -u cri-docker.socket:
-- stdout --
-- Journal begins at Tue 2023-12-12 00:37:32 UTC, ends at Tue 2023-12-12 00:37:44 UTC. --
Dec 12 00:37:33 minikube systemd[1]: Starting CRI Docker Socket for the API.
Dec 12 00:37:33 minikube systemd[1]: Listening on CRI Docker Socket for the API.
Dec 12 00:37:36 minikube systemd[1]: cri-docker.socket: Succeeded.
Dec 12 00:37:36 minikube systemd[1]: Closed CRI Docker Socket for the API.
Dec 12 00:37:36 minikube systemd[1]: Stopping CRI Docker Socket for the API.
Dec 12 00:37:36 minikube systemd[1]: Starting CRI Docker Socket for the API.
Dec 12 00:37:36 minikube systemd[1]: Listening on CRI Docker Socket for the API.
Dec 12 00:37:38 minikube systemd[1]: cri-docker.socket: Succeeded.
Dec 12 00:37:38 minikube systemd[1]: Closed CRI Docker Socket for the API.
Dec 12 00:37:38 minikube systemd[1]: Stopping CRI Docker Socket for the API.
Dec 12 00:37:38 minikube systemd[1]: Starting CRI Docker Socket for the API.
Dec 12 00:37:38 minikube systemd[1]: Listening on CRI Docker Socket for the API.
Dec 12 00:37:40 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Starting CRI Docker Socket for the API.
Dec 12 00:37:40 multinode-859606-m02 systemd[1]: Listening on CRI Docker Socket for the API.
Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Succeeded.
Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Closed CRI Docker Socket for the API.
Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Stopping CRI Docker Socket for the API.
Dec 12 00:37:44 multinode-859606-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
Dec 12 00:37:44 multinode-859606-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
-- /stdout --
W1212 00:37:44.218182 104530 out.go:239] *
W1212 00:37:44.219022 104530 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
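The decisive journal line is "Socket service cri-docker.service already active, refusing": systemd will not start a .socket unit while the service it feeds already owns the socket, so restarting cri-docker.socket on its own fails once cri-docker.service is running. A workaround sketch (not taken from this run) is to bounce the service with, or before, the socket:

    sudo systemctl stop cri-docker.service
    sudo systemctl restart cri-docker.socket
    sudo systemctl start cri-docker.service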
I1212 00:37:44.221199 104530 out.go:177]
*
* ==> Docker <==
* -- Journal begins at Tue 2023-12-12 00:36:31 UTC, ends at Tue 2023-12-12 00:37:45 UTC. --
Dec 12 00:37:05 multinode-859606 dockerd[833]: time="2023-12-12T00:37:05.679427372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 12 00:37:05 multinode-859606 dockerd[833]: time="2023-12-12T00:37:05.679658304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 12 00:37:07 multinode-859606 cri-dockerd[1062]: time="2023-12-12T00:37:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a871816c58a42ddd362fd89fa0457159c939b88d434669ab9c87303a2cdce4ea/resolv.conf as [nameserver 192.168.122.1]"
Dec 12 00:37:08 multinode-859606 dockerd[833]: time="2023-12-12T00:37:08.060585294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 12 00:37:08 multinode-859606 dockerd[833]: time="2023-12-12T00:37:08.060634616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 12 00:37:08 multinode-859606 dockerd[833]: time="2023-12-12T00:37:08.060653425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 12 00:37:08 multinode-859606 dockerd[833]: time="2023-12-12T00:37:08.060667094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.820246675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.820364685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.820401455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.820412808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.821705898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.822643071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.822914208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 12 00:37:11 multinode-859606 dockerd[833]: time="2023-12-12T00:37:11.823074692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 12 00:37:12 multinode-859606 cri-dockerd[1062]: time="2023-12-12T00:37:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cf1f78f2e3a90cc24f70123b2504134a6d0123ff6370d1bc64ce6dfdb1255ca3/resolv.conf as [nameserver 192.168.122.1]"
Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.464886651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.464948238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.464974231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.465070138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 12 00:37:12 multinode-859606 cri-dockerd[1062]: time="2023-12-12T00:37:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d54ee5c24673d29c1697cc6ea65d3e7ff3e3a6bd5430a949d8748c099c864ebe/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.761304053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.761428711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.761450336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 12 00:37:12 multinode-859606 dockerd[833]: time="2023-12-12T00:37:12.761504628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
d04545a1f3fee 8c811b4aec35f 33 seconds ago Running busybox 2 d54ee5c24673d busybox-5bc68d56bd-8rtcm
6784eb7676333 ead0a4a53df89 33 seconds ago Running coredns 2 cf1f78f2e3a90 coredns-5dd5756b68-t9jz8
2b07939ba9ef6 c7d1297425461 38 seconds ago Running kindnet-cni 2 a871816c58a42 kindnet-x2g5d
c656da1ebafe8 6e38f40d628db 40 seconds ago Running storage-provisioner 2 a3ad9a474f7aa storage-provisioner
810342f9e6bb9 83f6cc407eed8 41 seconds ago Running kube-proxy 2 d1a15039b58d3 kube-proxy-prf7f
8699415e5935b d058aa5ab969c 46 seconds ago Running kube-controller-manager 2 23f398d4b8027 kube-controller-manager-multinode-859606
1ebf2246a1889 7fe0e6f37db33 46 seconds ago Running kube-apiserver 2 39f0bea97f6f3 kube-apiserver-multinode-859606
acd573d2c57e9 73deb9a3f7025 46 seconds ago Running etcd 2 a7ec9e84f4ed9 etcd-multinode-859606
407d7ddb64227 e3db313c6dbc0 47 seconds ago Running kube-scheduler 2 0aaed96252109 kube-scheduler-multinode-859606
263bfb1fd11f8 8c811b4aec35f 3 minutes ago Exited busybox 1 5d7c24535c7c4 busybox-5bc68d56bd-8rtcm
abde5ad85d4a0 ead0a4a53df89 3 minutes ago Exited coredns 1 6960e84b00b86 coredns-5dd5756b68-t9jz8
55413175770e7 c7d1297425461 3 minutes ago Exited kindnet-cni 1 19421dc217531 kindnet-x2g5d
56fd6254d6e1f 6e38f40d628db 3 minutes ago Exited storage-provisioner 1 ecfcbd5863212 storage-provisioner
b63a75f45416a 83f6cc407eed8 3 minutes ago Exited kube-proxy 1 9767a413586e7 kube-proxy-prf7f
4ba778c674f06 e3db313c6dbc0 3 minutes ago Exited kube-scheduler 1 34ac7e63ee514 kube-scheduler-multinode-859606
19f9d76e8f1cc 73deb9a3f7025 3 minutes ago Exited etcd 1 510b18b7b6d68 etcd-multinode-859606
fc27b85835028 d058aa5ab969c 3 minutes ago Exited kube-controller-manager 1 ed0cff49857f6 kube-controller-manager-multinode-859606
a49117d4a4c80 7fe0e6f37db33 3 minutes ago Exited kube-apiserver 1 5aa25d818283c kube-apiserver-multinode-859606
*
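Note on the table above: each Running container (attempt 2) is paired with its Exited attempt-1 predecessor from before the VM restart, so the restart recreated every workload. A similar view can be pulled inside the node VM (a sketch, assuming the profile name from this run and the CRI tooling the minikube guest ships):
  minikube ssh -p multinode-859606 -- sudo crictl ps -a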
* ==> coredns [6784eb767633] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:56052 - 15722 "HINFO IN 8663818663549164460.3643203038294693926. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021662784s
*
* ==> coredns [abde5ad85d4a] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:52406 - 34433 "HINFO IN 7865527086462477606.3380958876542272888. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.061926124s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
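The two coredns blocks are the same pod across the restart: [abde5ad85d4a] is the attempt-1 container shutting down on SIGTERM, [6784eb767633] its replacement answering a health lookup. Both logs can be fetched together (assuming kubectl is pointed at this cluster):
  kubectl -n kube-system logs coredns-5dd5756b68-t9jz8
  kubectl -n kube-system logs coredns-5dd5756b68-t9jz8 --previous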
* ==> describe nodes <==
* Name: multinode-859606
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-859606
kubernetes.io/os=linux
minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4
minikube.k8s.io/name=multinode-859606
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_12_12T00_30_04_0700
minikube.k8s.io/version=v1.32.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 12 Dec 2023 00:29:59 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-859606
AcquireTime: <unset>
RenewTime: Tue, 12 Dec 2023 00:37:43 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 12 Dec 2023 00:37:09 +0000 Tue, 12 Dec 2023 00:29:58 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 12 Dec 2023 00:37:09 +0000 Tue, 12 Dec 2023 00:29:58 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 12 Dec 2023 00:37:09 +0000 Tue, 12 Dec 2023 00:29:58 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 12 Dec 2023 00:37:09 +0000 Tue, 12 Dec 2023 00:37:09 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.40
Hostname: multinode-859606
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: fa12b2faaeaf46879d88c9af881444f2
System UUID: fa12b2fa-aeaf-4687-9d88-c9af881444f2
Boot ID: 8cacd70d-3167-4874-8265-e7323653ef3f
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.7
Kubelet Version: v1.28.4
Kube-Proxy Version: v1.28.4
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-5bc68d56bd-8rtcm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m20s
kube-system coredns-5dd5756b68-t9jz8 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 7m29s
kube-system etcd-multinode-859606 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 7m41s
kube-system kindnet-x2g5d 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 7m29s
kube-system kube-apiserver-multinode-859606 250m (12%) 0 (0%) 0 (0%) 0 (0%) 7m43s
kube-system kube-controller-manager-multinode-859606 200m (10%) 0 (0%) 0 (0%) 0 (0%) 7m41s
kube-system kube-proxy-prf7f 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m29s
kube-system kube-scheduler-multinode-859606 100m (5%) 0 (0%) 0 (0%) 0 (0%) 7m43s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m27s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (10%) 220Mi (10%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m27s kube-proxy
Normal Starting 39s kube-proxy
Normal Starting 3m33s kube-proxy
Normal Starting 7m50s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 7m50s (x8 over 7m50s) kubelet Node multinode-859606 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m50s (x8 over 7m50s) kubelet Node multinode-859606 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m50s (x7 over 7m50s) kubelet Node multinode-859606 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 7m50s kubelet Updated Node Allocatable limit across pods
Normal NodeAllocatableEnforced 7m42s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 7m42s kubelet Node multinode-859606 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m42s kubelet Node multinode-859606 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m42s kubelet Node multinode-859606 status is now: NodeHasSufficientPID
Normal Starting 7m42s kubelet Starting kubelet.
Normal RegisteredNode 7m29s node-controller Node multinode-859606 event: Registered Node multinode-859606 in Controller
Normal NodeReady 7m17s kubelet Node multinode-859606 status is now: NodeReady
Normal NodeHasNoDiskPressure 3m41s (x8 over 3m41s) kubelet Node multinode-859606 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 3m41s (x8 over 3m41s) kubelet Node multinode-859606 status is now: NodeHasSufficientMemory
Normal Starting 3m41s kubelet Starting kubelet.
Normal NodeHasSufficientPID 3m41s (x7 over 3m41s) kubelet Node multinode-859606 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m41s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 3m22s node-controller Node multinode-859606 event: Registered Node multinode-859606 in Controller
Normal Starting 49s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 49s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 48s (x8 over 49s) kubelet Node multinode-859606 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 48s (x8 over 49s) kubelet Node multinode-859606 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 48s (x7 over 49s) kubelet Node multinode-859606 status is now: NodeHasSufficientPID
Normal RegisteredNode 30s node-controller Node multinode-859606 event: Registered Node multinode-859606 in Controller
Name: multinode-859606-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-859606-m02
kubernetes.io/os=linux
minikube.k8s.io/commit=f155626207ae1ae93e2fd3ceb81b1e734028b5f4
minikube.k8s.io/name=multinode-859606
minikube.k8s.io/primary=false
minikube.k8s.io/updated_at=2023_12_12T00_35_40_0700
minikube.k8s.io/version=v1.32.0
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 12 Dec 2023 00:34:58 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-859606-m02
AcquireTime: <unset>
RenewTime: Tue, 12 Dec 2023 00:35:49 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 12 Dec 2023 00:35:08 +0000 Tue, 12 Dec 2023 00:34:58 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 12 Dec 2023 00:35:08 +0000 Tue, 12 Dec 2023 00:34:58 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 12 Dec 2023 00:35:08 +0000 Tue, 12 Dec 2023 00:34:58 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 12 Dec 2023 00:35:08 +0000 Tue, 12 Dec 2023 00:35:08 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.65
Hostname: multinode-859606-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 4890d00be799442695d20e2e29a3fb1a
System UUID: 4890d00b-e799-4426-95d2-0e2e29a3fb1a
Boot ID: 1604b089-1d92-4def-8405-ea47c499ea28
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.7
Kubelet Version: v1.28.4
Kube-Proxy Version: v1.28.4
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-5bc68d56bd-npwlc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m10s
kube-system kindnet-d4q52 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 6m34s
kube-system kube-proxy-q9h26 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m34s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 6m28s kube-proxy
Normal Starting 2m45s kube-proxy
Normal NodeHasNoDiskPressure 6m34s (x2 over 6m34s) kubelet Node multinode-859606-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m34s (x2 over 6m34s) kubelet Node multinode-859606-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m34s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 6m34s (x2 over 6m34s) kubelet Node multinode-859606-m02 status is now: NodeHasSufficientMemory
Normal Starting 6m34s kubelet Starting kubelet.
Normal NodeReady 6m22s kubelet Node multinode-859606-m02 status is now: NodeReady
Normal Starting 2m47s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m47s (x2 over 2m47s) kubelet Node multinode-859606-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m47s (x2 over 2m47s) kubelet Node multinode-859606-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m47s (x2 over 2m47s) kubelet Node multinode-859606-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m47s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 2m37s kubelet Node multinode-859606-m02 status is now: NodeReady
Normal RegisteredNode 30s node-controller Node multinode-859606-m02 event: Registered Node multinode-859606-m02 in Controller
*
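Both nodes above report Ready with no taints and matching kubelet/kube-proxy versions, so the restarted cluster's view is consistent. The same dump can be regenerated with (assuming kubectl targets the profile's kubeconfig):
  kubectl describe nodes multinode-859606 multinode-859606-m02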
* ==> dmesg <==
* [Dec12 00:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.069779] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.352877] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.446664] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.151166] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.741585] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +6.453899] systemd-fstab-generator[512]: Ignoring "noauto" for root device
[ +0.101582] systemd-fstab-generator[523]: Ignoring "noauto" for root device
[ +1.302480] systemd-fstab-generator[757]: Ignoring "noauto" for root device
[ +0.284963] systemd-fstab-generator[794]: Ignoring "noauto" for root device
[ +0.109891] systemd-fstab-generator[805]: Ignoring "noauto" for root device
[ +0.121787] systemd-fstab-generator[818]: Ignoring "noauto" for root device
[ +1.585692] systemd-fstab-generator[1007]: Ignoring "noauto" for root device
[ +0.117522] systemd-fstab-generator[1018]: Ignoring "noauto" for root device
[ +0.106548] systemd-fstab-generator[1029]: Ignoring "noauto" for root device
[ +0.114704] systemd-fstab-generator[1040]: Ignoring "noauto" for root device
[ +0.119127] systemd-fstab-generator[1054]: Ignoring "noauto" for root device
[ +11.989836] systemd-fstab-generator[1306]: Ignoring "noauto" for root device
[ +0.395860] kauditd_printk_skb: 67 callbacks suppressed
*
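The dmesg excerpt covers only the fresh boot of the control-plane VM (Dec12 00:36, matching the restart); the nomodeset, regulatory.db, and NFSD warnings are the usual noise for the minikube guest image rather than faults. To re-read it in place, something like:
  minikube ssh -p multinode-859606 -- dmesg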
* ==> etcd [19f9d76e8f1c] <==
* {"level":"info","ts":"2023-12-12T00:34:07.554415Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-12-12T00:34:08.814211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a is starting a new election at term 2"}
{"level":"info","ts":"2023-12-12T00:34:08.814409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became pre-candidate at term 2"}
{"level":"info","ts":"2023-12-12T00:34:08.814454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a received MsgPreVoteResp from 1088a855a4aa8d0a at term 2"}
{"level":"info","ts":"2023-12-12T00:34:08.814544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became candidate at term 3"}
{"level":"info","ts":"2023-12-12T00:34:08.81456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a received MsgVoteResp from 1088a855a4aa8d0a at term 3"}
{"level":"info","ts":"2023-12-12T00:34:08.814719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became leader at term 3"}
{"level":"info","ts":"2023-12-12T00:34:08.81475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1088a855a4aa8d0a elected leader 1088a855a4aa8d0a at term 3"}
{"level":"info","ts":"2023-12-12T00:34:08.817889Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-12-12T00:34:08.81793Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1088a855a4aa8d0a","local-member-attributes":"{Name:multinode-859606 ClientURLs:[https://192.168.39.40:2379]}","request-path":"/0/members/1088a855a4aa8d0a/attributes","cluster-id":"ca485a4cd00ef8c5","publish-timeout":"7s"}
{"level":"info","ts":"2023-12-12T00:34:08.818495Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-12-12T00:34:08.819786Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.40:2379"}
{"level":"info","ts":"2023-12-12T00:34:08.820582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-12-12T00:34:08.821374Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-12-12T00:34:08.821452Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-12-12T00:35:54.355768Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-12-12T00:35:54.355918Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-859606","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.40:2380"],"advertise-client-urls":["https://192.168.39.40:2379"]}
{"level":"warn","ts":"2023-12-12T00:35:54.35605Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2023-12-12T00:35:54.356144Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2023-12-12T00:35:54.378231Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.40:2379: use of closed network connection"}
{"level":"warn","ts":"2023-12-12T00:35:54.378359Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.40:2379: use of closed network connection"}
{"level":"info","ts":"2023-12-12T00:35:54.378407Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1088a855a4aa8d0a","current-leader-member-id":"1088a855a4aa8d0a"}
{"level":"info","ts":"2023-12-12T00:35:54.382889Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.40:2380"}
{"level":"info","ts":"2023-12-12T00:35:54.383001Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.40:2380"}
{"level":"info","ts":"2023-12-12T00:35:54.383016Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-859606","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.40:2380"],"advertise-client-urls":["https://192.168.39.40:2379"]}
*
* ==> etcd [acd573d2c57e] <==
* {"level":"info","ts":"2023-12-12T00:36:59.853853Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-12-12T00:36:59.853921Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-12-12T00:36:59.860383Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2023-12-12T00:36:59.86044Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2023-12-12T00:36:59.860447Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2023-12-12T00:36:59.86089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a switched to configuration voters=(1191387187227823370)"}
{"level":"info","ts":"2023-12-12T00:36:59.860956Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ca485a4cd00ef8c5","local-member-id":"1088a855a4aa8d0a","added-peer-id":"1088a855a4aa8d0a","added-peer-peer-urls":["https://192.168.39.40:2380"]}
{"level":"info","ts":"2023-12-12T00:36:59.861097Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ca485a4cd00ef8c5","local-member-id":"1088a855a4aa8d0a","cluster-version":"3.5"}
{"level":"info","ts":"2023-12-12T00:36:59.86112Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-12-12T00:36:59.862426Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.40:2380"}
{"level":"info","ts":"2023-12-12T00:36:59.862439Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.40:2380"}
{"level":"info","ts":"2023-12-12T00:37:01.295742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a is starting a new election at term 3"}
{"level":"info","ts":"2023-12-12T00:37:01.296113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became pre-candidate at term 3"}
{"level":"info","ts":"2023-12-12T00:37:01.296217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a received MsgPreVoteResp from 1088a855a4aa8d0a at term 3"}
{"level":"info","ts":"2023-12-12T00:37:01.296246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became candidate at term 4"}
{"level":"info","ts":"2023-12-12T00:37:01.29635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a received MsgVoteResp from 1088a855a4aa8d0a at term 4"}
{"level":"info","ts":"2023-12-12T00:37:01.296374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became leader at term 4"}
{"level":"info","ts":"2023-12-12T00:37:01.296534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1088a855a4aa8d0a elected leader 1088a855a4aa8d0a at term 4"}
{"level":"info","ts":"2023-12-12T00:37:01.300242Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1088a855a4aa8d0a","local-member-attributes":"{Name:multinode-859606 ClientURLs:[https://192.168.39.40:2379]}","request-path":"/0/members/1088a855a4aa8d0a/attributes","cluster-id":"ca485a4cd00ef8c5","publish-timeout":"7s"}
{"level":"info","ts":"2023-12-12T00:37:01.300337Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-12-12T00:37:01.301147Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-12-12T00:37:01.301196Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-12-12T00:37:01.300444Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-12-12T00:37:01.302471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-12-12T00:37:01.302781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.40:2379"}
*
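[19f9d76e8f1c] shows the pre-restart etcd member winning term 3 and then closing cleanly on SIGTERM; [acd573d2c57e] is the same single-member cluster (id 1088a855a4aa8d0a) re-electing itself at term 4 after the reboot. A health probe could be run from inside the etcd pod; the cert paths below follow minikube's usual layout and are assumptions, not taken from this log:
  kubectl -n kube-system exec etcd-multinode-859606 -- etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
    --cert=/var/lib/minikube/certs/etcd/server.crt \
    --key=/var/lib/minikube/certs/etcd/server.key endpoint health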
* ==> kernel <==
* 00:37:45 up 1 min, 0 users, load average: 0.36, 0.15, 0.05
Linux multinode-859606 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kindnet [2b07939ba9ef] <==
* I1212 00:37:08.578043 1 main.go:102] connected to apiserver: https://10.96.0.1:443
I1212 00:37:08.578370 1 main.go:107] hostIP = 192.168.39.40
podIP = 192.168.39.40
I1212 00:37:08.578814 1 main.go:116] setting mtu 1500 for CNI
I1212 00:37:08.578863 1 main.go:146] kindnetd IP family: "ipv4"
I1212 00:37:08.578886 1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
I1212 00:37:09.268622 1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
I1212 00:37:09.268801 1 main.go:227] handling current node
I1212 00:37:09.269373 1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
I1212 00:37:09.269459 1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24]
I1212 00:37:09.270153 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.65 Flags: [] Table: 0}
I1212 00:37:19.282845 1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
I1212 00:37:19.283164 1 main.go:227] handling current node
I1212 00:37:19.283224 1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
I1212 00:37:19.283242 1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24]
I1212 00:37:29.296529 1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
I1212 00:37:29.296715 1 main.go:227] handling current node
I1212 00:37:29.296760 1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
I1212 00:37:29.296807 1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24]
I1212 00:37:39.311243 1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
I1212 00:37:39.311305 1 main.go:227] handling current node
I1212 00:37:39.311335 1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
I1212 00:37:39.311341 1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24]
*
* ==> kindnet [55413175770e] <==
* I1212 00:35:16.454929 1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
I1212 00:35:16.454984 1 main.go:227] handling current node
I1212 00:35:16.454995 1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
I1212 00:35:16.455001 1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24]
I1212 00:35:16.455337 1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
I1212 00:35:16.455389 1 main.go:250] Node multinode-859606-m03 has CIDR [10.244.3.0/24]
I1212 00:35:26.471853 1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
I1212 00:35:26.471968 1 main.go:227] handling current node
I1212 00:35:26.472088 1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
I1212 00:35:26.472097 1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24]
I1212 00:35:26.472358 1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
I1212 00:35:26.472371 1 main.go:250] Node multinode-859606-m03 has CIDR [10.244.3.0/24]
I1212 00:35:36.487874 1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
I1212 00:35:36.488360 1 main.go:227] handling current node
I1212 00:35:36.488546 1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
I1212 00:35:36.488629 1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24]
I1212 00:35:36.488840 1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
I1212 00:35:36.488925 1 main.go:250] Node multinode-859606-m03 has CIDR [10.244.3.0/24]
I1212 00:35:46.494897 1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
I1212 00:35:46.495056 1 main.go:227] handling current node
I1212 00:35:46.495149 1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
I1212 00:35:46.495206 1 main.go:250] Node multinode-859606-m02 has CIDR [10.244.1.0/24]
I1212 00:35:46.495503 1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
I1212 00:35:46.495589 1 main.go:250] Node multinode-859606-m03 has CIDR [10.244.2.0/24]
I1212 00:35:46.495749 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.13 Flags: [] Table: 0}
*
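Each kindnetd instance installs a kernel route to every peer node's PodCIDR: attempt 1 ([55413175770e]) was still tracking the since-removed m03 node as its CIDR moved from 10.244.3.0/24 to 10.244.2.0/24, while attempt 2 ([2b07939ba9ef]) only needs the m02 route. That route can be verified on the host, e.g.:
  minikube ssh -p multinode-859606 -- ip route show 10.244.1.0/24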
* ==> kube-apiserver [1ebf2246a188] <==
* I1212 00:37:02.702627 1 controller.go:116] Starting legacy_token_tracking_controller
I1212 00:37:02.702671 1 shared_informer.go:311] Waiting for caches to sync for configmaps
I1212 00:37:02.736556 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1212 00:37:02.736761 1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
I1212 00:37:02.802710 1 shared_informer.go:318] Caches are synced for configmaps
I1212 00:37:02.834500 1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
I1212 00:37:02.837240 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1212 00:37:02.838205 1 shared_informer.go:318] Caches are synced for crd-autoregister
I1212 00:37:02.838392 1 aggregator.go:166] initial CRD sync complete...
I1212 00:37:02.838579 1 autoregister_controller.go:141] Starting autoregister controller
I1212 00:37:02.838677 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1212 00:37:02.838752 1 cache.go:39] Caches are synced for autoregister controller
I1212 00:37:02.872198 1 shared_informer.go:318] Caches are synced for node_authorizer
I1212 00:37:02.888412 1 apf_controller.go:377] Running API Priority and Fairness config worker
I1212 00:37:02.888626 1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
I1212 00:37:02.897702 1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
I1212 00:37:02.897799 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1212 00:37:03.694700 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W1212 00:37:04.133856 1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.40]
I1212 00:37:04.135091 1 controller.go:624] quota admission added evaluator for: endpoints
I1212 00:37:05.819657 1 controller.go:624] quota admission added evaluator for: daemonsets.apps
I1212 00:37:06.057500 1 controller.go:624] quota admission added evaluator for: serviceaccounts
I1212 00:37:06.070583 1 controller.go:624] quota admission added evaluator for: deployments.apps
I1212 00:37:06.149126 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1212 00:37:06.158341 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
*
* ==> kube-apiserver [a49117d4a4c8] <==
* W1212 00:36:03.749168 1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:03.755376 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:03.761444 1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:03.789634 1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:03.826969 1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:03.835973 1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:03.858980 1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:03.863797 1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:03.896847 1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:03.946166 1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.009435 1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.038722 1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.070132 1 logging.go:59] [core] [Channel #184 SubChannel #185] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.079651 1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.119604 1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.130181 1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.136164 1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.136486 1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.243597 1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.254632 1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.304649 1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.318605 1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.327720 1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.363477 1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1212 00:36:04.392883 1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
*
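The run of grpc connection-refused warnings from [a49117d4a4c8] is the old apiserver losing its etcd backend while the VM shut down, not a live failure; [1ebf2246a188] has all caches synced within about a second of starting. Current apiserver health can be checked with (assuming kubectl targets this cluster):
  kubectl get --raw='/readyz?verbose'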
* ==> kube-controller-manager [8699415e5935] <==
* I1212 00:37:15.188062 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-859606-m02\" does not exist"
I1212 00:37:15.193974 1 shared_informer.go:318] Caches are synced for TTL
I1212 00:37:15.195602 1 shared_informer.go:318] Caches are synced for GC
I1212 00:37:15.211806 1 shared_informer.go:318] Caches are synced for persistent volume
I1212 00:37:15.219063 1 shared_informer.go:318] Caches are synced for resource quota
I1212 00:37:15.230936 1 shared_informer.go:318] Caches are synced for daemon sets
I1212 00:37:15.241063 1 shared_informer.go:318] Caches are synced for attach detach
I1212 00:37:15.249064 1 shared_informer.go:318] Caches are synced for endpoint_slice
I1212 00:37:15.284093 1 shared_informer.go:318] Caches are synced for taint
I1212 00:37:15.285082 1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
I1212 00:37:15.285495 1 taint_manager.go:205] "Starting NoExecuteTaintManager"
I1212 00:37:15.285713 1 taint_manager.go:210] "Sending events to api server"
I1212 00:37:15.286437 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-859606"
I1212 00:37:15.286709 1 shared_informer.go:318] Caches are synced for node
I1212 00:37:15.286903 1 range_allocator.go:174] "Sending events to api server"
I1212 00:37:15.286973 1 range_allocator.go:178] "Starting range CIDR allocator"
I1212 00:37:15.287128 1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
I1212 00:37:15.287247 1 shared_informer.go:318] Caches are synced for cidrallocator
I1212 00:37:15.287228 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-859606-m02"
I1212 00:37:15.287677 1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
I1212 00:37:15.290093 1 event.go:307] "Event occurred" object="multinode-859606" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-859606 event: Registered Node multinode-859606 in Controller"
I1212 00:37:15.290280 1 event.go:307] "Event occurred" object="multinode-859606-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-859606-m02 event: Registered Node multinode-859606-m02 in Controller"
I1212 00:37:15.627185 1 shared_informer.go:318] Caches are synced for garbage collector
I1212 00:37:15.627247 1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
I1212 00:37:15.644407 1 shared_informer.go:318] Caches are synced for garbage collector
*
* ==> kube-controller-manager [fc27b8583502] <==
* I1212 00:35:08.532461 1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-lr9gw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-lr9gw"
I1212 00:35:13.179054 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="75.824µs"
I1212 00:35:13.284393 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="196.544µs"
I1212 00:35:13.290737 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="93.165µs"
I1212 00:35:36.003626 1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-npwlc"
I1212 00:35:36.011695 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.300235ms"
I1212 00:35:36.026847 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.01405ms"
I1212 00:35:36.027555 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="655.698µs"
I1212 00:35:36.045674 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.429µs"
I1212 00:35:37.908609 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="13.802394ms"
I1212 00:35:37.908718 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="54.044µs"
I1212 00:35:38.012991 1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-859606-m02"
I1212 00:35:38.539106 1 event.go:307] "Event occurred" object="multinode-859606-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-859606-m03 event: Removing Node multinode-859606-m03 from Controller"
I1212 00:35:38.916634 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-859606-m03\" does not exist"
I1212 00:35:38.918930 1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-859606-m02"
I1212 00:35:38.921523 1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-jrfh4" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-jrfh4"
I1212 00:35:38.946516 1 range_allocator.go:380] "Set node PodCIDR" node="multinode-859606-m03" podCIDRs=["10.244.2.0/24"]
I1212 00:35:39.773060 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.621µs"
I1212 00:35:40.055003 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="53.052µs"
I1212 00:35:40.062833 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.141µs"
I1212 00:35:40.066206 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.721µs"
I1212 00:35:43.539946 1 event.go:307] "Event occurred" object="multinode-859606-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-859606-m03 event: Registered Node multinode-859606-m03 in Controller"
I1212 00:35:50.130971 1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-859606-m03"
I1212 00:35:52.529054 1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-859606-m02"
I1212 00:35:53.541785 1 event.go:307] "Event occurred" object="multinode-859606-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-859606-m03 event: Removing Node multinode-859606-m03 from Controller"
*
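In [fc27b8583502] the controller-manager is reconciling the earlier delete/re-add of multinode-859606-m03 (its PodCIDR reallocated to 10.244.2.0/24); in [8699415e5935] only the two surviving nodes are re-registered. Current membership is quick to confirm:
  kubectl get nodes -o wide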
* ==> kube-proxy [810342f9e6bb] <==
* I1212 00:37:05.425583 1 server_others.go:69] "Using iptables proxy"
I1212 00:37:05.461684 1 node.go:141] Successfully retrieved node IP: 192.168.39.40
I1212 00:37:05.600627 1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
I1212 00:37:05.600673 1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1212 00:37:05.604235 1 server_others.go:152] "Using iptables Proxier"
I1212 00:37:05.605144 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I1212 00:37:05.605600 1 server.go:846] "Version info" version="v1.28.4"
I1212 00:37:05.605643 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1212 00:37:05.607215 1 config.go:188] "Starting service config controller"
I1212 00:37:05.607603 1 shared_informer.go:311] Waiting for caches to sync for service config
I1212 00:37:05.607741 1 config.go:97] "Starting endpoint slice config controller"
I1212 00:37:05.607777 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I1212 00:37:05.611875 1 config.go:315] "Starting node config controller"
I1212 00:37:05.611943 1 shared_informer.go:311] Waiting for caches to sync for node config
I1212 00:37:05.708678 1 shared_informer.go:318] Caches are synced for endpoint slice config
I1212 00:37:05.708741 1 shared_informer.go:318] Caches are synced for service config
I1212 00:37:05.716211 1 shared_informer.go:318] Caches are synced for node config
*
* ==> kube-proxy [b63a75f45416] <==
* I1212 00:34:11.753699 1 server_others.go:69] "Using iptables proxy"
I1212 00:34:11.786606 1 node.go:141] Successfully retrieved node IP: 192.168.39.40
I1212 00:34:11.853481 1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
I1212 00:34:11.853530 1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1212 00:34:11.855858 1 server_others.go:152] "Using iptables Proxier"
I1212 00:34:11.856499 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I1212 00:34:11.856927 1 server.go:846] "Version info" version="v1.28.4"
I1212 00:34:11.856966 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1212 00:34:11.858681 1 config.go:188] "Starting service config controller"
I1212 00:34:11.859224 1 shared_informer.go:311] Waiting for caches to sync for service config
I1212 00:34:11.859381 1 config.go:315] "Starting node config controller"
I1212 00:34:11.859414 1 shared_informer.go:311] Waiting for caches to sync for node config
I1212 00:34:11.859947 1 config.go:97] "Starting endpoint slice config controller"
I1212 00:34:11.859982 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I1212 00:34:11.959936 1 shared_informer.go:318] Caches are synced for node config
I1212 00:34:11.959988 1 shared_informer.go:318] Caches are synced for service config
I1212 00:34:11.961091 1 shared_informer.go:318] Caches are synced for endpoint slice config
*
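Both kube-proxy attempts detect no IPv6 iptables support and run in single-stack IPv4 iptables mode, so service routing lives in the nat table. The chains it programs can be spot-checked in the VM, for example:
  minikube ssh -p multinode-859606 -- sudo iptables -t nat -L KUBE-SERVICES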
* ==> kube-scheduler [407d7ddb6422] <==
* I1212 00:37:00.229914 1 serving.go:348] Generated self-signed cert in-memory
W1212 00:37:02.799108 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1212 00:37:02.799169 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1212 00:37:02.799183 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W1212 00:37:02.799190 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1212 00:37:02.854155 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
I1212 00:37:02.857110 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1212 00:37:02.866129 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1212 00:37:02.869202 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1212 00:37:02.870581 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I1212 00:37:02.872823 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1212 00:37:02.970156 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [4ba778c674f0] <==
* I1212 00:34:08.393863 1 serving.go:348] Generated self-signed cert in-memory
W1212 00:34:10.311795 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1212 00:34:10.311894 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1212 00:34:10.311915 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W1212 00:34:10.312041 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1212 00:34:10.359778 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
I1212 00:34:10.359832 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1212 00:34:10.362119 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1212 00:34:10.362731 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1212 00:34:10.363426 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I1212 00:34:10.363524 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1212 00:34:10.463812 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1212 00:35:54.270484 1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
I1212 00:35:54.270615 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I1212 00:35:54.271035 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E1212 00:35:54.271407 1 run.go:74] "command failed" err="finished without leader elect"
*
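The closing "finished without leader elect" error in [4ba778c674f0] appears to be the normal exit path when the scheduler is terminated while holding leadership; [407d7ddb6422] takes over after the restart. The active holder is recorded in a coordination lease:
  kubectl -n kube-system get lease kube-scheduler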
* ==> kubelet <==
* -- Journal begins at Tue 2023-12-12 00:36:31 UTC, ends at Tue 2023-12-12 00:37:46 UTC. --
Dec 12 00:37:03 multinode-859606 kubelet[1312]: E1212 00:37:03.774931 1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk podName:e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:04.274913614 +0000 UTC m=+7.905251962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wgdzk" (UniqueName: "kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk") pod "busybox-5bc68d56bd-8rtcm" (UID: "e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2") : object "default"/"kube-root-ca.crt" not registered
Dec 12 00:37:04 multinode-859606 kubelet[1312]: E1212 00:37:04.249069 1312 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Dec 12 00:37:04 multinode-859606 kubelet[1312]: E1212 00:37:04.249160 1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3605a003-e8d6-46b2-8fe7-f45647656622-config-volume podName:3605a003-e8d6-46b2-8fe7-f45647656622 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:05.249144334 +0000 UTC m=+8.879482697 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3605a003-e8d6-46b2-8fe7-f45647656622-config-volume") pod "coredns-5dd5756b68-t9jz8" (UID: "3605a003-e8d6-46b2-8fe7-f45647656622") : object "kube-system"/"coredns" not registered
Dec 12 00:37:04 multinode-859606 kubelet[1312]: E1212 00:37:04.349652 1312 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Dec 12 00:37:04 multinode-859606 kubelet[1312]: E1212 00:37:04.349712 1312 projected.go:198] Error preparing data for projected volume kube-api-access-wgdzk for pod default/busybox-5bc68d56bd-8rtcm: object "default"/"kube-root-ca.crt" not registered
Dec 12 00:37:04 multinode-859606 kubelet[1312]: E1212 00:37:04.349766 1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk podName:e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:05.349752141 +0000 UTC m=+8.980090501 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wgdzk" (UniqueName: "kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk") pod "busybox-5bc68d56bd-8rtcm" (UID: "e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2") : object "default"/"kube-root-ca.crt" not registered
Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.260507 1312 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.261205 1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3605a003-e8d6-46b2-8fe7-f45647656622-config-volume podName:3605a003-e8d6-46b2-8fe7-f45647656622 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:07.261182355 +0000 UTC m=+10.891520705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3605a003-e8d6-46b2-8fe7-f45647656622-config-volume") pod "coredns-5dd5756b68-t9jz8" (UID: "3605a003-e8d6-46b2-8fe7-f45647656622") : object "kube-system"/"coredns" not registered
Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.367205 1312 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.367244 1312 projected.go:198] Error preparing data for projected volume kube-api-access-wgdzk for pod default/busybox-5bc68d56bd-8rtcm: object "default"/"kube-root-ca.crt" not registered
Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.367339 1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk podName:e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:07.367321008 +0000 UTC m=+10.997659370 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wgdzk" (UniqueName: "kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk") pod "busybox-5bc68d56bd-8rtcm" (UID: "e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2") : object "default"/"kube-root-ca.crt" not registered
Dec 12 00:37:05 multinode-859606 kubelet[1312]: I1212 00:37:05.521465 1312 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3ad9a474f7aa2a0c235a5125ee5afda9726fe7b702b1ec852e4ae79591c7981"
Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.696475 1312 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-t9jz8" podUID="3605a003-e8d6-46b2-8fe7-f45647656622"
Dec 12 00:37:05 multinode-859606 kubelet[1312]: E1212 00:37:05.697277 1312 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-8rtcm" podUID="e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2"
Dec 12 00:37:07 multinode-859606 kubelet[1312]: E1212 00:37:07.285269 1312 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Dec 12 00:37:07 multinode-859606 kubelet[1312]: E1212 00:37:07.285363 1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3605a003-e8d6-46b2-8fe7-f45647656622-config-volume podName:3605a003-e8d6-46b2-8fe7-f45647656622 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:11.285344975 +0000 UTC m=+14.915683323 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3605a003-e8d6-46b2-8fe7-f45647656622-config-volume") pod "coredns-5dd5756b68-t9jz8" (UID: "3605a003-e8d6-46b2-8fe7-f45647656622") : object "kube-system"/"coredns" not registered
Dec 12 00:37:07 multinode-859606 kubelet[1312]: E1212 00:37:07.386358 1312 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Dec 12 00:37:07 multinode-859606 kubelet[1312]: E1212 00:37:07.386406 1312 projected.go:198] Error preparing data for projected volume kube-api-access-wgdzk for pod default/busybox-5bc68d56bd-8rtcm: object "default"/"kube-root-ca.crt" not registered
Dec 12 00:37:07 multinode-859606 kubelet[1312]: E1212 00:37:07.386451 1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk podName:e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2 nodeName:}" failed. No retries permitted until 2023-12-12 00:37:11.386438304 +0000 UTC m=+15.016776664 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wgdzk" (UniqueName: "kubernetes.io/projected/e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2-kube-api-access-wgdzk") pod "busybox-5bc68d56bd-8rtcm" (UID: "e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2") : object "default"/"kube-root-ca.crt" not registered
Dec 12 00:37:07 multinode-859606 kubelet[1312]: I1212 00:37:07.932205 1312 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a871816c58a42ddd362fd89fa0457159c939b88d434669ab9c87303a2cdce4ea"
Dec 12 00:37:09 multinode-859606 kubelet[1312]: E1212 00:37:09.037465 1312 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-t9jz8" podUID="3605a003-e8d6-46b2-8fe7-f45647656622"
Dec 12 00:37:09 multinode-859606 kubelet[1312]: E1212 00:37:09.038182 1312 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-8rtcm" podUID="e2ee133c-b8b8-4f97-a8ac-5f9ca47e6ff2"
Dec 12 00:37:09 multinode-859606 kubelet[1312]: I1212 00:37:09.525142 1312 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Dec 12 00:37:12 multinode-859606 kubelet[1312]: I1212 00:37:12.550452 1312 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d54ee5c24673d29c1697cc6ea65d3e7ff3e3a6bd5430a949d8748c099c864ebe"
Dec 12 00:37:12 multinode-859606 kubelet[1312]: I1212 00:37:12.635414 1312 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf1f78f2e3a90cc24f70123b2504134a6d0123ff6370d1bc64ce6dfdb1255ca3"
-- /stdout --
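The scheduler's final line in the capture above, `"command failed" err="finished without leader elect"`, is how kube-scheduler reports that its leader-election loop ended while the process was still supposed to be running; in a RestartMultiNode run it coincides with the node being shut down (the scheduler stops at 00:35:54 and the journal restarts at 00:36:31), so it is a shutdown symptom rather than a root cause. For reference, a minimal sketch of the same client-go lease-based leader-election pattern; the names demo-scheduler and demo-holder are placeholders, not anything from this run:

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock in kube-system, the same mechanism kube-scheduler uses.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "demo-scheduler", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "demo-holder"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
			OnStoppedLeading: func() { log.Println("leadership ended") },
		},
	})
	// RunOrDie returns once leadership ends (lease lost or process shutting
	// down); a caller that treats that return as fatal emits a final log line
	// like the scheduler's "finished without leader elect" above.
}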
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-859606 -n multinode-859606
helpers_test.go:261: (dbg) Run: kubectl --context multinode-859606 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (87.10s)
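A note on the kubelet entries in the post-mortem: the repeated MountVolume.SetUp failures show the standard volume-retry backoff, with durationBeforeRetry doubling from 500ms to 1s, 2s, then 4s while the pods wait for the "kube-root-ca.crt" and "coredns" objects to reappear in the kubelet's object cache after the restart; the errors stop once the node reports ready ("Fast updating node status as it just became ready"). A minimal sketch of that doubling-retry pattern using the apimachinery wait helper; the objectRegistered check is a hypothetical stand-in for the kubelet's cache lookup, not its actual code:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// calls counts invocations of the hypothetical readiness check.
var calls int

// objectRegistered stands in for the kubelet's "is the ConfigMap in the
// local object cache yet?" lookup; here it succeeds on the fourth try.
func objectRegistered() bool {
	calls++
	return calls >= 4
}

func main() {
	// Doubling backoff starting at 500ms, matching the durationBeforeRetry
	// progression in the kubelet log above (500ms, 1s, 2s, 4s).
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond,
		Factor:   2.0,
		Steps:    5, // stop retrying after five attempts in this sketch
	}

	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		fmt.Printf("attempt %d\n", calls+1)
		return objectRegistered(), nil
	})
	if err != nil {
		// wait.ErrWaitTimeout is returned when Steps is exhausted first.
		fmt.Println("gave up:", err)
	}
}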