=== RUN TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run: out/minikube-linux-amd64 node list -p multinode-773885
multinode_test.go:288: (dbg) Run: out/minikube-linux-amd64 stop -p multinode-773885
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-773885: (28.494785999s)
multinode_test.go:293: (dbg) Run: out/minikube-linux-amd64 start -p multinode-773885 --wait=true -v=8 --alsologtostderr
E0223 22:21:14.560588 66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:21:42.244313 66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/ingress-addon-legacy-633033/client.crt: no such file or directory
E0223 22:21:48.831678 66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/addons-476957/client.crt: no such file or directory
E0223 22:22:35.338149 66927 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/functional-053497/client.crt: no such file or directory
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-773885 --wait=true -v=8 --alsologtostderr: exit status 90 (1m23.230284364s)
-- stdout --
* [multinode-773885] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15909
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting control plane node multinode-773885 in cluster multinode-773885
* Restarting existing kvm2 VM for "multinode-773885" ...
* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
* Configuring CNI (Container Networking Interface) ...
* Enabled addons:
* Verifying Kubernetes components...
* Starting worker node multinode-773885-m02 in cluster multinode-773885
* Restarting existing kvm2 VM for "multinode-773885-m02" ...
* Found network options:
- NO_PROXY=192.168.39.240
-- /stdout --
** stderr **
I0223 22:21:13.262206 80620 out.go:296] Setting OutFile to fd 1 ...
I0223 22:21:13.262485 80620 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 22:21:13.262530 80620 out.go:309] Setting ErrFile to fd 2...
I0223 22:21:13.262547 80620 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 22:21:13.263007 80620 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-59858/.minikube/bin
I0223 22:21:13.263577 80620 out.go:303] Setting JSON to false
I0223 22:21:13.264336 80620 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7426,"bootTime":1677183448,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0223 22:21:13.264396 80620 start.go:135] virtualization: kvm guest
I0223 22:21:13.267622 80620 out.go:177] * [multinode-773885] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0223 22:21:13.268914 80620 out.go:177] - MINIKUBE_LOCATION=15909
I0223 22:21:13.268968 80620 notify.go:220] Checking for updates...
I0223 22:21:13.270444 80620 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0223 22:21:13.271889 80620 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
I0223 22:21:13.273288 80620 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
I0223 22:21:13.274630 80620 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0223 22:21:13.275971 80620 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0223 22:21:13.277689 80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 22:21:13.277751 80620 driver.go:365] Setting default libvirt URI to qemu:///system
I0223 22:21:13.278270 80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 22:21:13.278328 80620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 22:21:13.292096 80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38981
I0223 22:21:13.292502 80620 main.go:141] libmachine: () Calling .GetVersion
I0223 22:21:13.293077 80620 main.go:141] libmachine: Using API Version 1
I0223 22:21:13.293100 80620 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 22:21:13.293421 80620 main.go:141] libmachine: () Calling .GetMachineName
I0223 22:21:13.293604 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:13.326142 80620 out.go:177] * Using the kvm2 driver based on existing profile
I0223 22:21:13.327601 80620 start.go:296] selected driver: kvm2
I0223 22:21:13.327615 80620 start.go:857] validating driver "kvm2" against &{Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 22:21:13.327745 80620 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0223 22:21:13.327989 80620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 22:21:13.328051 80620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-59858/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0223 22:21:13.341443 80620 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0223 22:21:13.342073 80620 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0223 22:21:13.342106 80620 cni.go:84] Creating CNI manager for ""
I0223 22:21:13.342116 80620 cni.go:136] 3 nodes found, recommending kindnet
I0223 22:21:13.342128 80620 start_flags.go:319] config:
{Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 22:21:13.342256 80620 iso.go:125] acquiring lock: {Name:mka4f25d544a3ff8c2a2fab814177dd4b23f9fc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 22:21:13.344079 80620 out.go:177] * Starting control plane node multinode-773885 in cluster multinode-773885
I0223 22:21:13.345362 80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0223 22:21:13.345394 80620 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0223 22:21:13.345409 80620 cache.go:57] Caching tarball of preloaded images
I0223 22:21:13.345481 80620 preload.go:174] Found /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0223 22:21:13.345493 80620 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0223 22:21:13.345663 80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
I0223 22:21:13.345836 80620 cache.go:193] Successfully downloaded all kic artifacts
I0223 22:21:13.345858 80620 start.go:364] acquiring machines lock for multinode-773885: {Name:mk190e887b13a8e75fbaa786555e3f621b6db823 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0223 22:21:13.345897 80620 start.go:368] acquired machines lock for "multinode-773885" in 21.539µs
I0223 22:21:13.345910 80620 start.go:96] Skipping create...Using existing machine configuration
I0223 22:21:13.345916 80620 fix.go:55] fixHost starting:
I0223 22:21:13.346182 80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 22:21:13.346210 80620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 22:21:13.358898 80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
I0223 22:21:13.359326 80620 main.go:141] libmachine: () Calling .GetVersion
I0223 22:21:13.359874 80620 main.go:141] libmachine: Using API Version 1
I0223 22:21:13.359895 80620 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 22:21:13.360176 80620 main.go:141] libmachine: () Calling .GetMachineName
I0223 22:21:13.360338 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:13.360464 80620 main.go:141] libmachine: (multinode-773885) Calling .GetState
I0223 22:21:13.361968 80620 fix.go:103] recreateIfNeeded on multinode-773885: state=Stopped err=<nil>
I0223 22:21:13.361991 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
W0223 22:21:13.362122 80620 fix.go:129] unexpected machine state, will restart: <nil>
I0223 22:21:13.364431 80620 out.go:177] * Restarting existing kvm2 VM for "multinode-773885" ...
I0223 22:21:13.365638 80620 main.go:141] libmachine: (multinode-773885) Calling .Start
I0223 22:21:13.365789 80620 main.go:141] libmachine: (multinode-773885) Ensuring networks are active...
I0223 22:21:13.366413 80620 main.go:141] libmachine: (multinode-773885) Ensuring network default is active
I0223 22:21:13.366726 80620 main.go:141] libmachine: (multinode-773885) Ensuring network mk-multinode-773885 is active
I0223 22:21:13.367088 80620 main.go:141] libmachine: (multinode-773885) Getting domain xml...
I0223 22:21:13.367766 80620 main.go:141] libmachine: (multinode-773885) Creating domain...
I0223 22:21:14.564410 80620 main.go:141] libmachine: (multinode-773885) Waiting to get IP...
I0223 22:21:14.565318 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:14.565709 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:14.565811 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:14.565729 80650 retry.go:31] will retry after 216.926568ms: waiting for machine to come up
I0223 22:21:14.784224 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:14.784682 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:14.784711 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:14.784633 80650 retry.go:31] will retry after 249.246042ms: waiting for machine to come up
I0223 22:21:15.035098 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:15.035423 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:15.035451 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.035397 80650 retry.go:31] will retry after 334.153469ms: waiting for machine to come up
I0223 22:21:15.370820 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:15.371326 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:15.371360 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.371252 80650 retry.go:31] will retry after 394.396319ms: waiting for machine to come up
I0223 22:21:15.766773 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:15.767259 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:15.767292 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.767204 80650 retry.go:31] will retry after 580.71112ms: waiting for machine to come up
I0223 22:21:16.350049 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:16.350438 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:16.350468 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:16.350387 80650 retry.go:31] will retry after 812.475241ms: waiting for machine to come up
I0223 22:21:17.164302 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:17.164761 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:17.164794 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:17.164713 80650 retry.go:31] will retry after 1.090615613s: waiting for machine to come up
I0223 22:21:18.257489 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:18.257882 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:18.257949 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:18.257850 80650 retry.go:31] will retry after 1.207436911s: waiting for machine to come up
I0223 22:21:19.467391 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:19.467804 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:19.467836 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:19.467758 80650 retry.go:31] will retry after 1.522373862s: waiting for machine to come up
I0223 22:21:20.992569 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:20.992936 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:20.992965 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:20.992883 80650 retry.go:31] will retry after 2.133891724s: waiting for machine to come up
I0223 22:21:23.129156 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:23.129626 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:23.129648 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:23.129597 80650 retry.go:31] will retry after 2.398257467s: waiting for machine to come up
I0223 22:21:25.529031 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:25.529472 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:25.529508 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:25.529418 80650 retry.go:31] will retry after 2.616816039s: waiting for machine to come up
I0223 22:21:28.149307 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:28.149703 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:28.149732 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:28.149668 80650 retry.go:31] will retry after 3.093858159s: waiting for machine to come up
I0223 22:21:31.245491 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.245970 80620 main.go:141] libmachine: (multinode-773885) Found IP for machine: 192.168.39.240
I0223 22:21:31.245992 80620 main.go:141] libmachine: (multinode-773885) Reserving static IP address...
I0223 22:21:31.246035 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has current primary IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.246498 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "multinode-773885", mac: "52:54:00:77:a9:85", ip: "192.168.39.240"} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.246523 80620 main.go:141] libmachine: (multinode-773885) DBG | skip adding static IP to network mk-multinode-773885 - found existing host DHCP lease matching {name: "multinode-773885", mac: "52:54:00:77:a9:85", ip: "192.168.39.240"}
I0223 22:21:31.246531 80620 main.go:141] libmachine: (multinode-773885) Reserved static IP address: 192.168.39.240
I0223 22:21:31.246540 80620 main.go:141] libmachine: (multinode-773885) Waiting for SSH to be available...
I0223 22:21:31.246549 80620 main.go:141] libmachine: (multinode-773885) DBG | Getting to WaitForSSH function...
I0223 22:21:31.248477 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.248821 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.248848 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.248945 80620 main.go:141] libmachine: (multinode-773885) DBG | Using SSH client type: external
I0223 22:21:31.248970 80620 main.go:141] libmachine: (multinode-773885) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa (-rw-------)
I0223 22:21:31.249043 80620 main.go:141] libmachine: (multinode-773885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa -p 22] /usr/bin/ssh <nil>}
I0223 22:21:31.249076 80620 main.go:141] libmachine: (multinode-773885) DBG | About to run SSH command:
I0223 22:21:31.249094 80620 main.go:141] libmachine: (multinode-773885) DBG | exit 0
I0223 22:21:31.338971 80620 main.go:141] libmachine: (multinode-773885) DBG | SSH cmd err, output: <nil>:
I0223 22:21:31.339315 80620 main.go:141] libmachine: (multinode-773885) Calling .GetConfigRaw
I0223 22:21:31.339952 80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
I0223 22:21:31.342708 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.343091 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.343112 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.343382 80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
I0223 22:21:31.343587 80620 machine.go:88] provisioning docker machine ...
I0223 22:21:31.343612 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:31.343856 80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
I0223 22:21:31.344026 80620 buildroot.go:166] provisioning hostname "multinode-773885"
I0223 22:21:31.344045 80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
I0223 22:21:31.344189 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:31.346343 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.346741 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.346772 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.346912 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:31.347101 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.347235 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.347362 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:31.347563 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:21:31.347987 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0223 22:21:31.348001 80620 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-773885 && echo "multinode-773885" | sudo tee /etc/hostname
I0223 22:21:31.483698 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773885
I0223 22:21:31.483729 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:31.486353 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.486705 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.486729 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.486927 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:31.487146 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.487349 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.487567 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:31.487765 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:21:31.488223 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0223 22:21:31.488247 80620 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-773885' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773885/g' /etc/hosts;
else
echo '127.0.1.1 multinode-773885' | sudo tee -a /etc/hosts;
fi
fi
I0223 22:21:31.610531 80620 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0223 22:21:31.610563 80620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-59858/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-59858/.minikube}
I0223 22:21:31.610579 80620 buildroot.go:174] setting up certificates
I0223 22:21:31.610589 80620 provision.go:83] configureAuth start
I0223 22:21:31.610602 80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
I0223 22:21:31.610887 80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
I0223 22:21:31.613554 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.613875 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.613901 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.614087 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:31.616271 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.616732 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.616766 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.616828 80620 provision.go:138] copyHostCerts
I0223 22:21:31.616880 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
I0223 22:21:31.616925 80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem, removing ...
I0223 22:21:31.616938 80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
I0223 22:21:31.617049 80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem (1078 bytes)
I0223 22:21:31.617142 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
I0223 22:21:31.617171 80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem, removing ...
I0223 22:21:31.617182 80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
I0223 22:21:31.617225 80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem (1123 bytes)
I0223 22:21:31.617338 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
I0223 22:21:31.617367 80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem, removing ...
I0223 22:21:31.617373 80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
I0223 22:21:31.617412 80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem (1671 bytes)
I0223 22:21:31.617475 80620 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem org=jenkins.multinode-773885 san=[192.168.39.240 192.168.39.240 localhost 127.0.0.1 minikube multinode-773885]
I0223 22:21:31.813280 80620 provision.go:172] copyRemoteCerts
I0223 22:21:31.813353 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0223 22:21:31.813402 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:31.816285 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.816679 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.816716 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.816918 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:31.817162 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.817351 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:31.817481 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
I0223 22:21:31.903913 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0223 22:21:31.904023 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0223 22:21:31.928843 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem -> /etc/docker/server.pem
I0223 22:21:31.928908 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0223 22:21:31.953083 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0223 22:21:31.953136 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0223 22:21:31.977825 80620 provision.go:86] duration metric: configureAuth took 367.222576ms
I0223 22:21:31.977848 80620 buildroot.go:189] setting minikube options for container-runtime
I0223 22:21:31.978069 80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 22:21:31.978096 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:31.978344 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:31.980808 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.981196 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.981226 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.981404 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:31.981631 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.981794 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.981903 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:31.982052 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:21:31.982469 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0223 22:21:31.982488 80620 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0223 22:21:32.100345 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0223 22:21:32.100366 80620 buildroot.go:70] root file system type: tmpfs
I0223 22:21:32.100467 80620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0223 22:21:32.100489 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:32.103003 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:32.103407 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:32.103436 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:32.103637 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:32.103824 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:32.103965 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:32.104148 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:32.104371 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:21:32.104858 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0223 22:21:32.104953 80620 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0223 22:21:32.237312 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0223 22:21:32.237343 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:32.240081 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:32.240430 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:32.240481 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:32.240599 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:32.240764 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:32.240928 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:32.241022 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:32.241158 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:21:32.241558 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0223 22:21:32.241575 80620 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0223 22:21:33.112176 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0223 22:21:33.112206 80620 machine.go:91] provisioned docker machine in 1.76860164s
I0223 22:21:33.112216 80620 start.go:300] post-start starting for "multinode-773885" (driver="kvm2")
I0223 22:21:33.112222 80620 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0223 22:21:33.112238 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:33.112595 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0223 22:21:33.112636 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:33.115711 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.116122 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:33.116159 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.116274 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:33.116476 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:33.116715 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:33.116933 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
I0223 22:21:33.204860 80620 ssh_runner.go:195] Run: cat /etc/os-release
I0223 22:21:33.208799 80620 command_runner.go:130] > NAME=Buildroot
I0223 22:21:33.208819 80620 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
I0223 22:21:33.208823 80620 command_runner.go:130] > ID=buildroot
I0223 22:21:33.208829 80620 command_runner.go:130] > VERSION_ID=2021.02.12
I0223 22:21:33.208833 80620 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0223 22:21:33.208858 80620 info.go:137] Remote host: Buildroot 2021.02.12
I0223 22:21:33.208867 80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/addons for local assets ...
I0223 22:21:33.208924 80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/files for local assets ...
I0223 22:21:33.208996 80620 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> 669272.pem in /etc/ssl/certs
I0223 22:21:33.209017 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /etc/ssl/certs/669272.pem
I0223 22:21:33.209096 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0223 22:21:33.216834 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /etc/ssl/certs/669272.pem (1708 bytes)
I0223 22:21:33.238598 80620 start.go:303] post-start completed in 126.369412ms
I0223 22:21:33.238618 80620 fix.go:57] fixHost completed within 19.892701007s
I0223 22:21:33.238638 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:33.241628 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.242000 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:33.242020 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.242184 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:33.242377 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:33.242544 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:33.242697 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:33.242867 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:21:33.243253 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0223 22:21:33.243264 80620 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0223 22:21:33.359558 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677190893.310436860
I0223 22:21:33.359587 80620 fix.go:207] guest clock: 1677190893.310436860
I0223 22:21:33.359596 80620 fix.go:220] Guest: 2023-02-23 22:21:33.31043686 +0000 UTC Remote: 2023-02-23 22:21:33.238622371 +0000 UTC m=+20.014549698 (delta=71.814489ms)
I0223 22:21:33.359621 80620 fix.go:191] guest clock delta is within tolerance: 71.814489ms
I0223 22:21:33.359628 80620 start.go:83] releasing machines lock for "multinode-773885", held for 20.013722401s
I0223 22:21:33.359654 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:33.359925 80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
I0223 22:21:33.362448 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.362830 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:33.362872 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.362979 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:33.363495 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:33.363673 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:33.363761 80620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0223 22:21:33.363798 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:33.363978 80620 ssh_runner.go:195] Run: cat /version.json
I0223 22:21:33.364008 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:33.366567 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.366853 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.366894 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:33.366918 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.367103 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:33.367284 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:33.367338 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:33.367363 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.367483 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:33.367511 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:33.367637 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
I0223 22:21:33.367796 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:33.367946 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:33.368088 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
I0223 22:21:33.472525 80620 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0223 22:21:33.472587 80620 command_runner.go:130] > {"iso_version": "v1.29.0-1676568791-15849", "kicbase_version": "v0.0.37-1675980448-15752", "minikube_version": "v1.29.0", "commit": "cf7ad99382c4b89a2ffa286b1101797332265ce3"}
I0223 22:21:33.472717 80620 ssh_runner.go:195] Run: systemctl --version
I0223 22:21:33.478170 80620 command_runner.go:130] > systemd 247 (247)
I0223 22:21:33.478214 80620 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I0223 22:21:33.478449 80620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0223 22:21:33.483322 80620 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0223 22:21:33.483517 80620 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0223 22:21:33.483559 80620 ssh_runner.go:195] Run: which cri-dockerd
I0223 22:21:33.486877 80620 command_runner.go:130] > /usr/bin/cri-dockerd
I0223 22:21:33.486963 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0223 22:21:33.494937 80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0223 22:21:33.509789 80620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0223 22:21:33.522704 80620 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0223 22:21:33.523037 80620 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0223 22:21:33.523053 80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0223 22:21:33.523114 80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 22:21:33.547334 80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0223 22:21:33.547357 80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0223 22:21:33.547366 80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0223 22:21:33.547373 80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0223 22:21:33.547379 80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0223 22:21:33.547386 80620 command_runner.go:130] > registry.k8s.io/pause:3.9
I0223 22:21:33.547393 80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
I0223 22:21:33.547402 80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0223 22:21:33.547409 80620 command_runner.go:130] > registry.k8s.io/pause:3.6
I0223 22:21:33.547429 80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0223 22:21:33.547437 80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0223 22:21:33.548840 80620 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
kindest/kindnetd:v20221004-44d545d1
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0223 22:21:33.548856 80620 docker.go:560] Images already preloaded, skipping extraction
I0223 22:21:33.548865 80620 start.go:485] detecting cgroup driver to use...
I0223 22:21:33.548962 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 22:21:33.565249 80620 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0223 22:21:33.565271 80620 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
I0223 22:21:33.565339 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0223 22:21:33.574475 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0223 22:21:33.582936 80620 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0223 22:21:33.582977 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0223 22:21:33.591609 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 22:21:33.600301 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0223 22:21:33.608920 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 22:21:33.617470 80620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0223 22:21:33.626224 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0223 22:21:33.634536 80620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0223 22:21:33.642631 80620 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0223 22:21:33.642679 80620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0223 22:21:33.650322 80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 22:21:33.748276 80620 ssh_runner.go:195] Run: sudo systemctl restart containerd
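Editor's note: the `sh -c "sudo sed -i -r ..."` runs above rewrite `/etc/containerd/config.toml` in place to pin the pause image and force the `cgroupfs` driver. The same substitutions can be tried safely against a temporary copy (the file contents below are a minimal illustrative stand-in, not minikube's real config):

```shell
#!/bin/sh
# Sketch of the sed edits minikube applies to containerd's config.toml,
# run against a temp file instead of /etc/containerd/config.toml.
set -e
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"
  restrict_oom_score_adj = true
  SystemdCgroup = true
EOF

# Same substitutions as in the log (GNU sed, as on the Linux guest):
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

result=$(grep 'sandbox_image' "$cfg")
rm -f "$cfg"
```

The `( *)`/`\1` capture keeps whatever indentation the TOML already uses, so the edit is position-independent within the file.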
I0223 22:21:33.765231 80620 start.go:485] detecting cgroup driver to use...
I0223 22:21:33.765298 80620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0223 22:21:33.783055 80620 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0223 22:21:33.783552 80620 command_runner.go:130] > [Unit]
I0223 22:21:33.783568 80620 command_runner.go:130] > Description=Docker Application Container Engine
I0223 22:21:33.783574 80620 command_runner.go:130] > Documentation=https://docs.docker.com
I0223 22:21:33.783579 80620 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0223 22:21:33.783584 80620 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0223 22:21:33.783589 80620 command_runner.go:130] > StartLimitBurst=3
I0223 22:21:33.783595 80620 command_runner.go:130] > StartLimitIntervalSec=60
I0223 22:21:33.783598 80620 command_runner.go:130] > [Service]
I0223 22:21:33.783603 80620 command_runner.go:130] > Type=notify
I0223 22:21:33.783607 80620 command_runner.go:130] > Restart=on-failure
I0223 22:21:33.783614 80620 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0223 22:21:33.783625 80620 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0223 22:21:33.783631 80620 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0223 22:21:33.783640 80620 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0223 22:21:33.783647 80620 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0223 22:21:33.783653 80620 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0223 22:21:33.783660 80620 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0223 22:21:33.783668 80620 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0223 22:21:33.783674 80620 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0223 22:21:33.783678 80620 command_runner.go:130] > ExecStart=
I0223 22:21:33.783691 80620 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0223 22:21:33.783696 80620 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0223 22:21:33.783702 80620 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0223 22:21:33.783708 80620 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0223 22:21:33.783712 80620 command_runner.go:130] > LimitNOFILE=infinity
I0223 22:21:33.783715 80620 command_runner.go:130] > LimitNPROC=infinity
I0223 22:21:33.783719 80620 command_runner.go:130] > LimitCORE=infinity
I0223 22:21:33.783724 80620 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0223 22:21:33.783728 80620 command_runner.go:130] > # Only systemd 226 and above support this version.
I0223 22:21:33.783733 80620 command_runner.go:130] > TasksMax=infinity
I0223 22:21:33.783736 80620 command_runner.go:130] > TimeoutStartSec=0
I0223 22:21:33.783742 80620 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0223 22:21:33.783746 80620 command_runner.go:130] > Delegate=yes
I0223 22:21:33.783751 80620 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0223 22:21:33.783755 80620 command_runner.go:130] > KillMode=process
I0223 22:21:33.783758 80620 command_runner.go:130] > [Install]
I0223 22:21:33.783765 80620 command_runner.go:130] > WantedBy=multi-user.target
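Editor's note: the unit dump above shows the drop-in pattern its own comments describe: an empty `ExecStart=` first clears the command inherited from the base `docker.service`, and the second `ExecStart=` supplies the replacement. A minimal drop-in using that pattern looks like this (the path below is a conventional example, not the file minikube ships):

```ini
# /etc/systemd/system/docker.service.d/10-override.conf  (illustrative path)
[Service]
# Clear the inherited command; without this line systemd rejects the unit
# ("more than one ExecStart= setting") for non-oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

After editing a drop-in, `systemctl daemon-reload` followed by a restart picks it up, matching the sequence in the log.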
I0223 22:21:33.784203 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0223 22:21:33.800310 80620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0223 22:21:33.820089 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0223 22:21:33.831934 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 22:21:33.843320 80620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0223 22:21:33.870509 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 22:21:33.882768 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 22:21:33.898405 80620 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0223 22:21:33.898433 80620 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0223 22:21:33.898700 80620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0223 22:21:33.998916 80620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0223 22:21:34.101490 80620 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0223 22:21:34.101526 80620 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0223 22:21:34.117559 80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 22:21:34.221898 80620 ssh_runner.go:195] Run: sudo systemctl restart docker
I0223 22:21:35.643194 80620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.421256026s)
I0223 22:21:35.643291 80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0223 22:21:35.759716 80620 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0223 22:21:35.863224 80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0223 22:21:35.965951 80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 22:21:36.072240 80620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0223 22:21:36.092427 80620 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0223 22:21:36.092508 80620 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0223 22:21:36.104108 80620 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0223 22:21:36.104128 80620 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0223 22:21:36.104134 80620 command_runner.go:130] > Device: 16h/22d Inode: 814 Links: 1
I0223 22:21:36.104143 80620 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0223 22:21:36.104156 80620 command_runner.go:130] > Access: 2023-02-23 22:21:36.038985633 +0000
I0223 22:21:36.104168 80620 command_runner.go:130] > Modify: 2023-02-23 22:21:36.038985633 +0000
I0223 22:21:36.104180 80620 command_runner.go:130] > Change: 2023-02-23 22:21:36.041985633 +0000
I0223 22:21:36.104189 80620 command_runner.go:130] > Birth: -
I0223 22:21:36.104213 80620 start.go:553] Will wait 60s for crictl version
I0223 22:21:36.104260 80620 ssh_runner.go:195] Run: which crictl
I0223 22:21:36.110223 80620 command_runner.go:130] > /usr/bin/crictl
I0223 22:21:36.110588 80620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0223 22:21:36.185549 80620 command_runner.go:130] > Version: 0.1.0
I0223 22:21:36.185577 80620 command_runner.go:130] > RuntimeName: docker
I0223 22:21:36.185585 80620 command_runner.go:130] > RuntimeVersion: 20.10.23
I0223 22:21:36.185593 80620 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0223 22:21:36.185626 80620 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0223 22:21:36.185698 80620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 22:21:36.217919 80620 command_runner.go:130] > 20.10.23
I0223 22:21:36.219196 80620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 22:21:36.248973 80620 command_runner.go:130] > 20.10.23
I0223 22:21:36.253095 80620 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
I0223 22:21:36.253136 80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
I0223 22:21:36.255830 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:36.256233 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:36.256260 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:36.256492 80620 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0223 22:21:36.260126 80620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0223 22:21:36.272218 80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0223 22:21:36.272269 80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 22:21:36.294497 80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0223 22:21:36.294518 80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0223 22:21:36.294523 80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0223 22:21:36.294528 80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0223 22:21:36.294532 80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0223 22:21:36.294536 80620 command_runner.go:130] > registry.k8s.io/pause:3.9
I0223 22:21:36.294541 80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
I0223 22:21:36.294546 80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0223 22:21:36.294550 80620 command_runner.go:130] > registry.k8s.io/pause:3.6
I0223 22:21:36.294554 80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0223 22:21:36.294558 80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0223 22:21:36.295537 80620 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
kindest/kindnetd:v20221004-44d545d1
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0223 22:21:36.295553 80620 docker.go:560] Images already preloaded, skipping extraction
I0223 22:21:36.295600 80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 22:21:36.317087 80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0223 22:21:36.317104 80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0223 22:21:36.317109 80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0223 22:21:36.317114 80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0223 22:21:36.317119 80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0223 22:21:36.317123 80620 command_runner.go:130] > registry.k8s.io/pause:3.9
I0223 22:21:36.317127 80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
I0223 22:21:36.317133 80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0223 22:21:36.317137 80620 command_runner.go:130] > registry.k8s.io/pause:3.6
I0223 22:21:36.317142 80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0223 22:21:36.317149 80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0223 22:21:36.318116 80620 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
kindest/kindnetd:v20221004-44d545d1
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0223 22:21:36.318131 80620 cache_images.go:84] Images are preloaded, skipping loading
I0223 22:21:36.318198 80620 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0223 22:21:36.351288 80620 command_runner.go:130] > cgroupfs
I0223 22:21:36.352347 80620 cni.go:84] Creating CNI manager for ""
I0223 22:21:36.352366 80620 cni.go:136] 3 nodes found, recommending kindnet
I0223 22:21:36.352384 80620 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0223 22:21:36.352404 80620 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-773885 NodeName:multinode-773885 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0223 22:21:36.352535 80620 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.240
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-773885"
kubeletExtraArgs:
node-ip: 192.168.39.240
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0223 22:21:36.352608 80620 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-773885 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0223 22:21:36.352654 80620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0223 22:21:36.361734 80620 command_runner.go:130] > kubeadm
I0223 22:21:36.361745 80620 command_runner.go:130] > kubectl
I0223 22:21:36.361749 80620 command_runner.go:130] > kubelet
I0223 22:21:36.361984 80620 binaries.go:44] Found k8s binaries, skipping transfer
I0223 22:21:36.362045 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0223 22:21:36.369631 80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
I0223 22:21:36.384815 80620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0223 22:21:36.399471 80620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
I0223 22:21:36.414791 80620 ssh_runner.go:195] Run: grep 192.168.39.240 control-plane.minikube.internal$ /etc/hosts
I0223 22:21:36.418133 80620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.240 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
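Editor's note: the `grep -v ... ; echo ...` pipeline above is minikube's idempotent `/etc/hosts` update: strip any stale line for the name, append the current mapping, then copy the result back. A sketch of the same pattern against a temp file (the real command anchors the pattern on a tab; the simplified pattern here is an assumption for portability):

```shell
#!/bin/sh
# Idempotent hosts-entry update, as in the log, on a temp file.
set -e
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.39.200\tcontrol-plane.minikube.internal\n' > "$hosts"

# Drop any existing entry for the name, then append the current IP:
{ grep -v 'control-plane\.minikube\.internal$' "$hosts"; \
  echo "192.168.39.240 control-plane.minikube.internal"; } > "$hosts.new"
mv "$hosts.new" "$hosts"

result=$(grep 'control-plane' "$hosts")
rm -f "$hosts"
```

Because the old entry is removed before the new one is appended, rerunning the update never accumulates duplicate lines.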
I0223 22:21:36.429567 80620 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885 for IP: 192.168.39.240
I0223 22:21:36.429596 80620 certs.go:186] acquiring lock for shared ca certs: {Name:mkb47a35d7b33f6ba829c92dc16cfaf70cb716c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 22:21:36.429732 80620 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key
I0223 22:21:36.429768 80620 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key
I0223 22:21:36.429863 80620 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key
I0223 22:21:36.429933 80620 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key.ac2ca5a7
I0223 22:21:36.429971 80620 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key
I0223 22:21:36.429982 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0223 22:21:36.429999 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0223 22:21:36.430009 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0223 22:21:36.430023 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0223 22:21:36.430035 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0223 22:21:36.430047 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0223 22:21:36.430058 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0223 22:21:36.430070 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0223 22:21:36.430120 80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem (1338 bytes)
W0223 22:21:36.430145 80620 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927_empty.pem, impossibly tiny 0 bytes
I0223 22:21:36.430155 80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem (1675 bytes)
I0223 22:21:36.430178 80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem (1078 bytes)
I0223 22:21:36.430200 80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem (1123 bytes)
I0223 22:21:36.430224 80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem (1671 bytes)
I0223 22:21:36.430265 80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem (1708 bytes)
I0223 22:21:36.430293 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /usr/share/ca-certificates/669272.pem
I0223 22:21:36.430307 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0223 22:21:36.430319 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem -> /usr/share/ca-certificates/66927.pem
I0223 22:21:36.430835 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0223 22:21:36.452666 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0223 22:21:36.474354 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0223 22:21:36.496347 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0223 22:21:36.518192 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0223 22:21:36.539742 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0223 22:21:36.561567 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0223 22:21:36.582936 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0223 22:21:36.605667 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /usr/share/ca-certificates/669272.pem (1708 bytes)
I0223 22:21:36.627349 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0223 22:21:36.649138 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem --> /usr/share/ca-certificates/66927.pem (1338 bytes)
I0223 22:21:36.670645 80620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0223 22:21:36.685674 80620 ssh_runner.go:195] Run: openssl version
I0223 22:21:36.690629 80620 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0223 22:21:36.690924 80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/66927.pem && ln -fs /usr/share/ca-certificates/66927.pem /etc/ssl/certs/66927.pem"
I0223 22:21:36.699754 80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/66927.pem
I0223 22:21:36.703759 80620 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/66927.pem
I0223 22:21:36.704095 80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/66927.pem
I0223 22:21:36.704128 80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/66927.pem
I0223 22:21:36.709182 80620 command_runner.go:130] > 51391683
I0223 22:21:36.709238 80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/66927.pem /etc/ssl/certs/51391683.0"
I0223 22:21:36.718122 80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669272.pem && ln -fs /usr/share/ca-certificates/669272.pem /etc/ssl/certs/669272.pem"
I0223 22:21:36.726789 80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669272.pem
I0223 22:21:36.730766 80620 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/669272.pem
I0223 22:21:36.730841 80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/669272.pem
I0223 22:21:36.730885 80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669272.pem
I0223 22:21:36.735795 80620 command_runner.go:130] > 3ec20f2e
I0223 22:21:36.736176 80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/669272.pem /etc/ssl/certs/3ec20f2e.0"
I0223 22:21:36.745026 80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0223 22:21:36.753682 80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0223 22:21:36.757609 80620 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
I0223 22:21:36.757830 80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
I0223 22:21:36.757864 80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0223 22:21:36.762876 80620 command_runner.go:130] > b5213941
I0223 22:21:36.762930 80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
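Editor's note: the `openssl x509 -hash` values above (`51391683`, `3ec20f2e`, `b5213941`) are OpenSSL subject-hash names; linking each cert as `<hash>.0` in `/etc/ssl/certs` is how OpenSSL finds CA certificates by hash lookup. The `test -L ... || ln -fs ...` guard makes the step idempotent, sketched here in a temp dir with the hash taken from the log rather than recomputed:

```shell
#!/bin/sh
# Hash-named CA symlink step from the log, in a throwaway directory.
set -e
dir=$(mktemp -d)
touch "$dir/minikubeCA.pem"
hash=b5213941   # in the log this comes from: openssl x509 -hash -noout -in <cert>

# Create the symlink only if it is not already there (idempotent):
test -L "$dir/$hash.0" || ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"
# A second run is a no-op:
test -L "$dir/$hash.0" || ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"

result=$(readlink "$dir/$hash.0")
rm -rf "$dir"
```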
I0223 22:21:36.771746 80620 kubeadm.go:401] StartCluster: {Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 22:21:36.771889 80620 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0223 22:21:36.795673 80620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0223 22:21:36.804158 80620 command_runner.go:130] > /var/lib/kubelet/config.yaml
I0223 22:21:36.804177 80620 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
I0223 22:21:36.804208 80620 command_runner.go:130] > /var/lib/minikube/etcd:
I0223 22:21:36.804223 80620 command_runner.go:130] > member
I0223 22:21:36.804253 80620 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0223 22:21:36.804270 80620 kubeadm.go:633] restartCluster start
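The restart path above is chosen because the `sudo ls` probe found all three artifacts of a previous kubeadm run (the kubelet flags file, the kubelet config, and the etcd data directory). A minimal sketch of that decision, with the paths taken from the log and the function name purely illustrative (not minikube's actual helper):

```shell
# Restart the cluster in place only when every artifact from a previous
# kubeadm run is still on disk; any missing file forces a fresh init.
has_existing_cluster() {
  root="${1:-}"   # optional prefix so the check can be exercised off-node
  for p in /var/lib/kubelet/kubeadm-flags.env \
           /var/lib/kubelet/config.yaml \
           /var/lib/minikube/etcd; do
    [ -e "${root}${p}" ] || return 1
  done
  return 0
}
```

minikube performs the equivalent check with a single `sudo ls` over all three paths, treating a non-zero exit as "no existing configuration".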
I0223 22:21:36.804326 80620 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0223 22:21:36.812345 80620 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0223 22:21:36.812718 80620 kubeconfig.go:135] verify returned: extract IP: "multinode-773885" does not appear in /home/jenkins/minikube-integration/15909-59858/kubeconfig
I0223 22:21:36.812798 80620 kubeconfig.go:146] "multinode-773885" context is missing from /home/jenkins/minikube-integration/15909-59858/kubeconfig - will repair!
I0223 22:21:36.813094 80620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-59858/kubeconfig: {Name:mkb3ee8537c1c29485268d18a34139db6a7d5ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 22:21:36.813506 80620 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15909-59858/kubeconfig
I0223 22:21:36.813719 80620 kapi.go:59] client config for multinode-773885: &rest.Config{Host:"https://192.168.39.240:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key", CAFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0223 22:21:36.814424 80620 cert_rotation.go:137] Starting client certificate rotation controller
I0223 22:21:36.814616 80620 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0223 22:21:36.822391 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:36.822434 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:36.832386 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:37.333153 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:37.333231 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:37.344298 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:37.832833 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:37.832931 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:37.843863 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:38.333039 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:38.333157 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:38.344397 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:38.833335 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:38.833418 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:38.844307 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:39.332585 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:39.332660 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:39.343665 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:39.833274 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:39.833358 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:39.844484 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:40.332983 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:40.333065 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:40.344099 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:40.832657 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:40.832750 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:40.843615 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:41.333154 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:41.333245 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:41.344059 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:41.832619 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:41.832703 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:41.843654 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:42.333248 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:42.333328 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:42.344533 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:42.833157 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:42.833256 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:42.843975 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:43.333351 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:43.333418 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:43.344740 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:43.832562 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:43.832672 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:43.843659 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:44.333327 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:44.333407 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:44.344578 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:44.833173 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:44.833245 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:44.844332 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:45.332909 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:45.333037 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:45.344107 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:45.832647 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:45.832732 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:45.843986 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:46.332538 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:46.332617 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:46.343428 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:46.833367 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:46.833455 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:46.844521 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:46.844541 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:46.844582 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:46.854411 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:46.854446 80620 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
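The block above is a bounded retry loop: the same `pgrep` probe is reissued roughly every 500ms until it either finds a kube-apiserver PID or a deadline passes, at which point minikube gives up with "timed out waiting for the condition" and falls through to reconfiguration. A generic sketch of that pattern (helper name illustrative; the real probe is shown in the comment):

```shell
# Retry a probe command every ~500ms until it succeeds or timeout_s elapses.
wait_for() {
  timeout_s=$1; shift
  deadline=$(( $(date +%s) + timeout_s ))
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1   # timed out waiting for the condition
    fi
    sleep 0.5
  done
}
# On the node the probe would be (as in the log):
#   wait_for 10 sudo pgrep -xnf 'kube-apiserver.*minikube.*'
```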
I0223 22:21:46.854455 80620 kubeadm.go:1120] stopping kube-system containers ...
I0223 22:21:46.854520 80620 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0223 22:21:46.882631 80620 command_runner.go:130] > a31cf43457e0
I0223 22:21:46.882655 80620 command_runner.go:130] > b83daa4cdd8d
I0223 22:21:46.882661 80620 command_runner.go:130] > 75e472928e30
I0223 22:21:46.882666 80620 command_runner.go:130] > 20f2e353f8d4
I0223 22:21:46.882674 80620 command_runner.go:130] > f6b2b873cba9
I0223 22:21:46.882682 80620 command_runner.go:130] > 6becaf5c8640
I0223 22:21:46.882688 80620 command_runner.go:130] > a2a9a29b5a41
I0223 22:21:46.882694 80620 command_runner.go:130] > f284ce294fa0
I0223 22:21:46.882700 80620 command_runner.go:130] > 8d29ee663e61
I0223 22:21:46.882707 80620 command_runner.go:130] > baad115b76c6
I0223 22:21:46.882725 80620 command_runner.go:130] > 53723346fe3c
I0223 22:21:46.882735 80620 command_runner.go:130] > 6a41aad93299
I0223 22:21:46.882743 80620 command_runner.go:130] > 745d6ec7adf4
I0223 22:21:46.882750 80620 command_runner.go:130] > 979e703c6176
I0223 22:21:46.882757 80620 command_runner.go:130] > 3b6e6d975efa
I0223 22:21:46.882766 80620 command_runner.go:130] > 072b5f08a10f
I0223 22:21:46.882797 80620 docker.go:456] Stopping containers: [a31cf43457e0 b83daa4cdd8d 75e472928e30 20f2e353f8d4 f6b2b873cba9 6becaf5c8640 a2a9a29b5a41 f284ce294fa0 8d29ee663e61 baad115b76c6 53723346fe3c 6a41aad93299 745d6ec7adf4 979e703c6176 3b6e6d975efa 072b5f08a10f]
I0223 22:21:46.882868 80620 ssh_runner.go:195] Run: docker stop a31cf43457e0 b83daa4cdd8d 75e472928e30 20f2e353f8d4 f6b2b873cba9 6becaf5c8640 a2a9a29b5a41 f284ce294fa0 8d29ee663e61 baad115b76c6 53723346fe3c 6a41aad93299 745d6ec7adf4 979e703c6176 3b6e6d975efa 072b5f08a10f
I0223 22:21:46.908823 80620 command_runner.go:130] > a31cf43457e0
I0223 22:21:46.908844 80620 command_runner.go:130] > b83daa4cdd8d
I0223 22:21:46.908853 80620 command_runner.go:130] > 75e472928e30
I0223 22:21:46.908858 80620 command_runner.go:130] > 20f2e353f8d4
I0223 22:21:46.908865 80620 command_runner.go:130] > f6b2b873cba9
I0223 22:21:46.908870 80620 command_runner.go:130] > 6becaf5c8640
I0223 22:21:46.908876 80620 command_runner.go:130] > a2a9a29b5a41
I0223 22:21:46.909404 80620 command_runner.go:130] > f284ce294fa0
I0223 22:21:46.909419 80620 command_runner.go:130] > 8d29ee663e61
I0223 22:21:46.909424 80620 command_runner.go:130] > baad115b76c6
I0223 22:21:46.909441 80620 command_runner.go:130] > 53723346fe3c
I0223 22:21:46.909828 80620 command_runner.go:130] > 6a41aad93299
I0223 22:21:46.909847 80620 command_runner.go:130] > 745d6ec7adf4
I0223 22:21:46.909853 80620 command_runner.go:130] > 979e703c6176
I0223 22:21:46.909858 80620 command_runner.go:130] > 3b6e6d975efa
I0223 22:21:46.909864 80620 command_runner.go:130] > 072b5f08a10f
I0223 22:21:46.911025 80620 ssh_runner.go:195] Run: sudo systemctl stop kubelet
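The teardown just shown has two steps: list every container whose Docker name matches the kubelet's `k8s_<container>_<pod>_<namespace>_…` naming pattern for the kube-system namespace, stop them all in one `docker stop`, then stop the kubelet itself so it cannot restart them. A sketch of the container half, with `DOCKER` made overridable purely so the flow can be exercised without a real daemon (an assumption of this sketch, not minikube's code):

```shell
DOCKER="${DOCKER:-docker}"

# List and stop all kube-system containers, matching the filter from the log.
stop_kube_system_containers() {
  ids=$("$DOCKER" ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}}')
  # Word-splitting on $ids is intentional: pass every ID in one invocation.
  [ -n "$ids" ] && "$DOCKER" stop $ids
}
```

After this, `sudo systemctl stop kubelet` (as in the log) keeps the static pods from being recreated while the manifests are rewritten.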
I0223 22:21:46.925825 80620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0223 22:21:46.933780 80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0223 22:21:46.933807 80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0223 22:21:46.933818 80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0223 22:21:46.933842 80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0223 22:21:46.934068 80620 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0223 22:21:46.934127 80620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0223 22:21:46.942292 80620 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0223 22:21:46.942311 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0223 22:21:47.060140 80620 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0223 22:21:47.060421 80620 command_runner.go:130] > [certs] Using existing ca certificate authority
I0223 22:21:47.060722 80620 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I0223 22:21:47.061266 80620 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0223 22:21:47.061579 80620 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
I0223 22:21:47.062097 80620 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
I0223 22:21:47.062730 80620 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
I0223 22:21:47.063273 80620 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
I0223 22:21:47.063668 80620 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
I0223 22:21:47.064166 80620 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0223 22:21:47.064500 80620 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
I0223 22:21:47.064789 80620 command_runner.go:130] > [certs] Using the existing "sa" key
I0223 22:21:47.066082 80620 command_runner.go:130] ! W0223 22:21:47.003599 1259 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0223 22:21:47.066190 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0223 22:21:47.118462 80620 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0223 22:21:47.207705 80620 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0223 22:21:47.310176 80620 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0223 22:21:47.491530 80620 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0223 22:21:47.570853 80620 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0223 22:21:47.573364 80620 command_runner.go:130] ! W0223 22:21:47.061082 1265 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0223 22:21:47.573502 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0223 22:21:47.637325 80620 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0223 22:21:47.638644 80620 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0223 22:21:47.638664 80620 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0223 22:21:47.751602 80620 command_runner.go:130] ! W0223 22:21:47.567753 1271 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0223 22:21:47.751640 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0223 22:21:47.811937 80620 command_runner.go:130] ! W0223 22:21:47.761774 1293 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0223 22:21:47.829349 80620 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0223 22:21:47.829375 80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0223 22:21:47.829384 80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0223 22:21:47.829392 80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0223 22:21:47.829573 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0223 22:21:47.919203 80620 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0223 22:21:47.922916 80620 command_runner.go:130] ! W0223 22:21:47.858650 1302 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
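Rather than a full `kubeadm init`, the restart path replays individual init phases in sequence: certificates, kubeconfigs, kubelet start, control-plane manifests, then local etcd. The order, reconstructed from the log above (the phases are listed, not executed, in this sketch):

```shell
# Phase order used by the cluster reconfigure above.
kubeadm_restart_phases() {
  cat <<'EOF'
certs all
kubeconfig all
kubelet-start
control-plane all
etcd local
EOF
}
# Each line is run on the node as (per the log):
#   sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" \
#     kubeadm init phase <line> --config /var/tmp/minikube/kubeadm.yaml
```

Because the certificates and `sa` key already exist on disk, the `certs` phase reuses them, which is why the node keeps its identity across the restart.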
I0223 22:21:47.923089 80620 api_server.go:51] waiting for apiserver process to appear ...
I0223 22:21:47.923171 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:21:48.438055 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:21:48.938524 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:21:49.437773 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:21:49.938504 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:21:50.438625 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:21:50.455679 80620 command_runner.go:130] > 1675
I0223 22:21:50.456038 80620 api_server.go:71] duration metric: took 2.532952682s to wait for apiserver process to appear ...
I0223 22:21:50.456061 80620 api_server.go:87] waiting for apiserver healthz status ...
I0223 22:21:50.456073 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:21:50.456563 80620 api_server.go:268] stopped: https://192.168.39.240:8443/healthz: Get "https://192.168.39.240:8443/healthz": dial tcp 192.168.39.240:8443: connect: connection refused
I0223 22:21:50.957285 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:21:53.851413 80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0223 22:21:53.851440 80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0223 22:21:53.957622 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:21:53.962959 80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0223 22:21:53.962996 80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0223 22:21:54.457567 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:21:54.462593 80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0223 22:21:54.462613 80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0223 22:21:54.957140 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:21:54.975573 80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0223 22:21:54.975619 80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0223 22:21:55.457159 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:21:55.468052 80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 200:
ok
I0223 22:21:55.468134 80620 round_trippers.go:463] GET https://192.168.39.240:8443/version
I0223 22:21:55.468145 80620 round_trippers.go:469] Request Headers:
I0223 22:21:55.468159 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:55.468173 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:55.478605 80620 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
I0223 22:21:55.478631 80620 round_trippers.go:577] Response Headers:
I0223 22:21:55.478639 80620 round_trippers.go:580] Content-Length: 263
I0223 22:21:55.478645 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:55 GMT
I0223 22:21:55.478651 80620 round_trippers.go:580] Audit-Id: 0e80152b-56d5-4ba7-8d3d-ebf4ef092ec4
I0223 22:21:55.478656 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:55.478661 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:55.478667 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:55.478677 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:55.478720 80620 request.go:1171] Response Body: {
"major": "1",
"minor": "26",
"gitVersion": "v1.26.1",
"gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
"gitTreeState": "clean",
"buildDate": "2023-01-18T15:51:25Z",
"goVersion": "go1.19.5",
"compiler": "gc",
"platform": "linux/amd64"
}
I0223 22:21:55.478820 80620 api_server.go:140] control plane version: v1.26.1
I0223 22:21:55.478837 80620 api_server.go:130] duration metric: took 5.022769855s to wait for apiserver health ...
I0223 22:21:55.478847 80620 cni.go:84] Creating CNI manager for ""
I0223 22:21:55.478864 80620 cni.go:136] 3 nodes found, recommending kindnet
I0223 22:21:55.481215 80620 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0223 22:21:55.482654 80620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0223 22:21:55.487827 80620 command_runner.go:130] > File: /opt/cni/bin/portmap
I0223 22:21:55.487850 80620 command_runner.go:130] > Size: 2798344 Blocks: 5472 IO Block: 4096 regular file
I0223 22:21:55.487860 80620 command_runner.go:130] > Device: 11h/17d Inode: 3542 Links: 1
I0223 22:21:55.487870 80620 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0223 22:21:55.487881 80620 command_runner.go:130] > Access: 2023-02-23 22:21:25.431985633 +0000
I0223 22:21:55.487897 80620 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
I0223 22:21:55.487905 80620 command_runner.go:130] > Change: 2023-02-23 22:21:23.668985633 +0000
I0223 22:21:55.487910 80620 command_runner.go:130] > Birth: -
I0223 22:21:55.488315 80620 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
I0223 22:21:55.488335 80620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0223 22:21:55.519404 80620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0223 22:21:56.635297 80620 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0223 22:21:56.642116 80620 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0223 22:21:56.645709 80620 command_runner.go:130] > serviceaccount/kindnet unchanged
I0223 22:21:56.664280 80620 command_runner.go:130] > daemonset.apps/kindnet configured
I0223 22:21:56.666573 80620 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.147136699s)
I0223 22:21:56.666612 80620 system_pods.go:43] waiting for kube-system pods to appear ...
I0223 22:21:56.666717 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:21:56.666728 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.666739 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.666748 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.670034 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:21:56.670049 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.670056 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.670062 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.670081 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.670087 80620 round_trippers.go:580] Audit-Id: 03e54a77-0840-4896-9a52-5cdd73109000
I0223 22:21:56.670100 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.670111 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.671358 80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"742"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"408","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82574 chars]
I0223 22:21:56.675255 80620 system_pods.go:59] 12 kube-system pods found
I0223 22:21:56.675279 80620 system_pods.go:61] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
I0223 22:21:56.675286 80620 system_pods.go:61] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0223 22:21:56.675291 80620 system_pods.go:61] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
I0223 22:21:56.675295 80620 system_pods.go:61] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
I0223 22:21:56.675316 80620 system_pods.go:61] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
I0223 22:21:56.675325 80620 system_pods.go:61] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
I0223 22:21:56.675337 80620 system_pods.go:61] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0223 22:21:56.675345 80620 system_pods.go:61] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
I0223 22:21:56.675349 80620 system_pods.go:61] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
I0223 22:21:56.675356 80620 system_pods.go:61] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
I0223 22:21:56.675361 80620 system_pods.go:61] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0223 22:21:56.675367 80620 system_pods.go:61] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
I0223 22:21:56.675372 80620 system_pods.go:74] duration metric: took 8.754325ms to wait for pod list to return data ...
I0223 22:21:56.675385 80620 node_conditions.go:102] verifying NodePressure condition ...
I0223 22:21:56.675430 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
I0223 22:21:56.675437 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.675444 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.675451 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.680543 80620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0223 22:21:56.680557 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.680564 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.680569 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.680577 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.680582 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.680589 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.680597 80620 round_trippers.go:580] Audit-Id: e86d112e-250e-4963-a6fb-b8fd3c902f59
I0223 22:21:56.681128 80620 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"742"},"items":[{"metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16319 chars]
I0223 22:21:56.681878 80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 22:21:56.681909 80620 node_conditions.go:123] node cpu capacity is 2
I0223 22:21:56.681918 80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 22:21:56.681922 80620 node_conditions.go:123] node cpu capacity is 2
I0223 22:21:56.681926 80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 22:21:56.681932 80620 node_conditions.go:123] node cpu capacity is 2
I0223 22:21:56.681938 80620 node_conditions.go:105] duration metric: took 6.549163ms to run NodePressure ...
I0223 22:21:56.681958 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0223 22:21:56.825426 80620 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I0223 22:21:56.885114 80620 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I0223 22:21:56.886787 80620 command_runner.go:130] ! W0223 22:21:56.690228 2212 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0223 22:21:56.886832 80620 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0223 22:21:56.886942 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
I0223 22:21:56.886954 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.886965 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.886975 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.889503 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:56.889525 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.889536 80620 round_trippers.go:580] Audit-Id: a9179ace-0f8b-41d7-acc9-15a5468f5431
I0223 22:21:56.889545 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.889552 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.889561 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.889569 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.889582 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.890569 80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"744"},"items":[{"metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"740","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29273 chars]
I0223 22:21:56.891994 80620 kubeadm.go:784] kubelet initialised
I0223 22:21:56.892020 80620 kubeadm.go:785] duration metric: took 5.174392ms waiting for restarted kubelet to initialise ...
I0223 22:21:56.892029 80620 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 22:21:56.892094 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:21:56.892105 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.892115 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.892126 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.898216 80620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0223 22:21:56.898231 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.898240 80620 round_trippers.go:580] Audit-Id: 0cbc9df8-5ddc-4405-a649-09747f9c7e5c
I0223 22:21:56.898250 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.898260 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.898268 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.898280 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.898290 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.899125 80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"744"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"408","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82574 chars]
I0223 22:21:56.901600 80620 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
I0223 22:21:56.901668 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:21:56.901680 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.901690 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.901697 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.906528 80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0223 22:21:56.906543 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.906552 80620 round_trippers.go:580] Audit-Id: c55b1693-f442-4306-a674-87f938885743
I0223 22:21:56.906561 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.906571 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.906580 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.906589 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.906602 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.906875 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:21:56.907276 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:56.907287 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.907294 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.907312 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.916593 80620 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0223 22:21:56.916608 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.916616 80620 round_trippers.go:580] Audit-Id: 3b9497a6-fa4c-472e-b004-b0b6906e7a7f
I0223 22:21:56.916625 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.916634 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.916644 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.916652 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.916662 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.916802 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:56.917117 80620 pod_ready.go:97] node "multinode-773885" hosting pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:56.917132 80620 pod_ready.go:81] duration metric: took 15.512217ms waiting for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
E0223 22:21:56.917139 80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:56.917145 80620 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:21:56.917197 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-773885
I0223 22:21:56.917206 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.917213 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.917219 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.919079 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:21:56.919091 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.919097 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.919103 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.919108 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.919114 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.919120 80620 round_trippers.go:580] Audit-Id: 143d00d2-5e6b-44b2-a517-c658e2dc5a9f
I0223 22:21:56.919129 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.919346 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"740","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6289 chars]
I0223 22:21:56.919779 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:56.919793 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.919802 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.919808 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.921391 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:21:56.921406 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.921413 80620 round_trippers.go:580] Audit-Id: 9f5eac9e-078a-4143-9d6d-1b1de0a3102a
I0223 22:21:56.921423 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.921431 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.921440 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.921450 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.921460 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.921618 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:56.921957 80620 pod_ready.go:97] node "multinode-773885" hosting pod "etcd-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:56.921972 80620 pod_ready.go:81] duration metric: took 4.821003ms waiting for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
E0223 22:21:56.921981 80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "etcd-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:56.921998 80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:21:56.922055 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-773885
I0223 22:21:56.922065 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.922076 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.922089 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.925010 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:56.925024 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.925033 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.925043 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.925052 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.925061 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.925070 80620 round_trippers.go:580] Audit-Id: 422d48f0-48d6-4c16-8b22-40f26357fc34
I0223 22:21:56.925075 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.925261 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-773885","namespace":"kube-system","uid":"f9cbb81f-f7c6-47e7-9e3c-393680d5ee52","resourceVersion":"282","creationTimestamp":"2023-02-23T22:17:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.240:8443","kubernetes.io/config.hash":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.mirror":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.seen":"2023-02-23T22:17:25.440360314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7392 chars]
I0223 22:21:56.925639 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:56.925652 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.925659 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.925666 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.927337 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:21:56.927356 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.927365 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.927373 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.927382 80620 round_trippers.go:580] Audit-Id: 020b9a46-ef43-4607-90e4-5d3e9e7d1a08
I0223 22:21:56.927392 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.927401 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.927413 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.927579 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:56.927921 80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-apiserver-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:56.927940 80620 pod_ready.go:81] duration metric: took 5.928725ms waiting for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
E0223 22:21:56.927950 80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-apiserver-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:56.927957 80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:21:56.928048 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-773885
I0223 22:21:56.928062 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.928072 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.928082 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.930936 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:56.930950 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.930956 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.930961 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.930968 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.930982 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.930995 80620 round_trippers.go:580] Audit-Id: 00aa01ac-5a84-4085-b3b5-f5f6d06fbe47
I0223 22:21:56.931005 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.931218 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-773885","namespace":"kube-system","uid":"df36fee9-6048-45f6-b17a-679c2c9e3daf","resourceVersion":"739","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.mirror":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.seen":"2023-02-23T22:17:38.195450048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7424 chars]
I0223 22:21:57.067070 80620 request.go:622] Waited for 135.338555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:57.067135 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:57.067145 80620 round_trippers.go:469] Request Headers:
I0223 22:21:57.067163 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:57.067176 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:57.070119 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:57.070137 80620 round_trippers.go:577] Response Headers:
I0223 22:21:57.070143 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:57.070149 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:57 GMT
I0223 22:21:57.070155 80620 round_trippers.go:580] Audit-Id: 5d3402dd-3874-4131-9278-561b1ef77762
I0223 22:21:57.070161 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:57.070167 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:57.070178 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:57.070297 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:57.070668 80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-controller-manager-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:57.070691 80620 pod_ready.go:81] duration metric: took 142.727116ms waiting for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
E0223 22:21:57.070704 80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-controller-manager-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:57.070713 80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
I0223 22:21:57.267166 80620 request.go:622] Waited for 196.388978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
I0223 22:21:57.267229 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
I0223 22:21:57.267239 80620 round_trippers.go:469] Request Headers:
I0223 22:21:57.267252 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:57.267264 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:57.269968 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:57.269991 80620 round_trippers.go:577] Response Headers:
I0223 22:21:57.270000 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:57.270012 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:57 GMT
I0223 22:21:57.270084 80620 round_trippers.go:580] Audit-Id: 27049171-e30c-4ab9-a6ed-77da398a4856
I0223 22:21:57.270104 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:57.270113 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:57.270123 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:57.270261 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5d5vn","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3dfcd7d-3514-4286-93e9-f51f9f91c2d7","resourceVersion":"491","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
I0223 22:21:57.467146 80620 request.go:622] Waited for 196.375195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
I0223 22:21:57.467201 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
I0223 22:21:57.467207 80620 round_trippers.go:469] Request Headers:
I0223 22:21:57.467216 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:57.467235 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:57.469655 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:57.469680 80620 round_trippers.go:577] Response Headers:
I0223 22:21:57.469690 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:57.469716 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:57 GMT
I0223 22:21:57.469727 80620 round_trippers.go:580] Audit-Id: d420f22f-77bb-4122-826c-40660cb2d6fb
I0223 22:21:57.469734 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:57.469741 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:57.469749 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:57.469921 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m02","uid":"6657df38-0b72-4f36-a536-d4626cf22c9b","resourceVersion":"560","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4513 chars]
I0223 22:21:57.470230 80620 pod_ready.go:92] pod "kube-proxy-5d5vn" in "kube-system" namespace has status "Ready":"True"
I0223 22:21:57.470242 80620 pod_ready.go:81] duration metric: took 399.521519ms waiting for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
I0223 22:21:57.470250 80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
I0223 22:21:57.667697 80620 request.go:622] Waited for 197.385632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
I0223 22:21:57.667766 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
I0223 22:21:57.667771 80620 round_trippers.go:469] Request Headers:
I0223 22:21:57.667778 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:57.667785 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:57.670278 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:57.670298 80620 round_trippers.go:577] Response Headers:
I0223 22:21:57.670308 80620 round_trippers.go:580] Audit-Id: 0128213a-339a-470c-989d-e7b486abebe1
I0223 22:21:57.670316 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:57.670324 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:57.670333 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:57.670342 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:57.670351 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:57 GMT
I0223 22:21:57.670879 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mdjks","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1cb3f4c-effa-4f0e-bbaa-ff792325a571","resourceVersion":"377","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
I0223 22:21:57.867695 80620 request.go:622] Waited for 196.388162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:57.867765 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:57.867770 80620 round_trippers.go:469] Request Headers:
I0223 22:21:57.867778 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:57.867784 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:57.870409 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:57.870431 80620 round_trippers.go:577] Response Headers:
I0223 22:21:57.870442 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:57.870452 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:57.870460 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:57.870466 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:57 GMT
I0223 22:21:57.870474 80620 round_trippers.go:580] Audit-Id: a53d6f4e-2730-4846-9147-87d2b5b1bc56
I0223 22:21:57.870483 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:57.870627 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:57.870935 80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-proxy-mdjks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:57.870951 80620 pod_ready.go:81] duration metric: took 400.694245ms waiting for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
E0223 22:21:57.870962 80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-proxy-mdjks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:57.870970 80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
I0223 22:21:58.067390 80620 request.go:622] Waited for 196.340619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
I0223 22:21:58.067527 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
I0223 22:21:58.067575 80620 round_trippers.go:469] Request Headers:
I0223 22:21:58.067593 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:58.067604 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:58.071162 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:21:58.071181 80620 round_trippers.go:577] Response Headers:
I0223 22:21:58.071191 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:58 GMT
I0223 22:21:58.071199 80620 round_trippers.go:580] Audit-Id: 49f82db0-63aa-4950-9457-03eeb73d1c6f
I0223 22:21:58.071207 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:58.071215 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:58.071223 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:58.071231 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:58.071517 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psgdt","generateName":"kube-proxy-","namespace":"kube-system","uid":"57d8204d-38f2-413f-8855-237db379cd27","resourceVersion":"721","creationTimestamp":"2023-02-23T22:19:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:19:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
I0223 22:21:58.267044 80620 request.go:622] Waited for 195.100843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
I0223 22:21:58.267131 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
I0223 22:21:58.267138 80620 round_trippers.go:469] Request Headers:
I0223 22:21:58.267150 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:58.267161 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:58.269786 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:58.269805 80620 round_trippers.go:577] Response Headers:
I0223 22:21:58.269812 80620 round_trippers.go:580] Audit-Id: 28398178-6b4f-4ced-bd50-76b0a4e432c0
I0223 22:21:58.269818 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:58.269823 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:58.269828 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:58.269833 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:58.269846 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:58 GMT
I0223 22:21:58.270022 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m03","uid":"22181ea8-5030-450a-9927-f28a8241ef6a","resourceVersion":"732","creationTimestamp":"2023-02-23T22:20:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4329 chars]
I0223 22:21:58.270353 80620 pod_ready.go:92] pod "kube-proxy-psgdt" in "kube-system" namespace has status "Ready":"True"
I0223 22:21:58.270367 80620 pod_ready.go:81] duration metric: took 399.384993ms waiting for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
I0223 22:21:58.270378 80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:21:58.467272 80620 request.go:622] Waited for 196.812846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
I0223 22:21:58.467358 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
I0223 22:21:58.467365 80620 round_trippers.go:469] Request Headers:
I0223 22:21:58.467376 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:58.467390 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:58.470141 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:58.470169 80620 round_trippers.go:577] Response Headers:
I0223 22:21:58.470179 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:58.470188 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:58.470195 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:58.470204 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:58.470213 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:58 GMT
I0223 22:21:58.470221 80620 round_trippers.go:580] Audit-Id: e5044b8f-aa40-4729-93fe-c25c71ca551c
I0223 22:21:58.470349 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-773885","namespace":"kube-system","uid":"ecc1fa39-40dc-4d57-be46-8e9a01431180","resourceVersion":"742","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.mirror":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.seen":"2023-02-23T22:17:38.195431871Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5136 chars]
I0223 22:21:58.667199 80620 request.go:622] Waited for 196.342723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:58.667264 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:58.667275 80620 round_trippers.go:469] Request Headers:
I0223 22:21:58.667288 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:58.667318 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:58.669825 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:58.669849 80620 round_trippers.go:577] Response Headers:
I0223 22:21:58.669860 80620 round_trippers.go:580] Audit-Id: 8c1fc862-a3d1-4b08-b8c2-f41fa6fd3cd6
I0223 22:21:58.669869 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:58.669877 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:58.669885 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:58.669899 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:58.669910 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:58 GMT
I0223 22:21:58.670129 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:58.670496 80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-scheduler-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:58.670517 80620 pod_ready.go:81] duration metric: took 400.130245ms waiting for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
E0223 22:21:58.670528 80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-scheduler-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:58.670539 80620 pod_ready.go:38] duration metric: took 1.778499138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 22:21:58.670563 80620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0223 22:21:58.684600 80620 command_runner.go:130] > -16
I0223 22:21:58.684633 80620 ops.go:34] apiserver oom_adj: -16
I0223 22:21:58.684642 80620 kubeadm.go:637] restartCluster took 21.880365731s
I0223 22:21:58.684651 80620 kubeadm.go:403] StartCluster complete in 21.912911073s
I0223 22:21:58.684672 80620 settings.go:142] acquiring lock: {Name:mk906211444ec0c60982da29f94c92fb57d72ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 22:21:58.684774 80620 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15909-59858/kubeconfig
I0223 22:21:58.685563 80620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-59858/kubeconfig: {Name:mkb3ee8537c1c29485268d18a34139db6a7d5ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 22:21:58.685892 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0223 22:21:58.686005 80620 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0223 22:21:58.686136 80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 22:21:58.686171 80620 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15909-59858/kubeconfig
I0223 22:21:58.687964 80620 out.go:177] * Enabled addons:
I0223 22:21:58.686508 80620 kapi.go:59] client config for multinode-773885: &rest.Config{Host:"https://192.168.39.240:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key", CAFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0223 22:21:58.689318 80620 addons.go:492] enable addons completed in 3.316295ms: enabled=[]
I0223 22:21:58.689636 80620 round_trippers.go:463] GET https://192.168.39.240:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0223 22:21:58.689653 80620 round_trippers.go:469] Request Headers:
I0223 22:21:58.689665 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:58.689674 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:58.692405 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:58.692425 80620 round_trippers.go:577] Response Headers:
I0223 22:21:58.692435 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:58 GMT
I0223 22:21:58.692448 80620 round_trippers.go:580] Audit-Id: 2916b551-1504-4ee6-8f0b-8bb9b49c72fe
I0223 22:21:58.692457 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:58.692474 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:58.692486 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:58.692499 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:58.692512 80620 round_trippers.go:580] Content-Length: 291
I0223 22:21:58.692541 80620 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"88095e59-4c47-4f2e-9af0-397e7cc508de","resourceVersion":"743","creationTimestamp":"2023-02-23T22:17:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0223 22:21:58.692706 80620 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-773885" context rescaled to 1 replicas
I0223 22:21:58.692739 80620 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0223 22:21:58.694468 80620 out.go:177] * Verifying Kubernetes components...
I0223 22:21:58.696081 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 22:21:58.815357 80620 command_runner.go:130] > apiVersion: v1
I0223 22:21:58.815388 80620 command_runner.go:130] > data:
I0223 22:21:58.815395 80620 command_runner.go:130] >   Corefile: |
I0223 22:21:58.815401 80620 command_runner.go:130] >     .:53 {
I0223 22:21:58.815406 80620 command_runner.go:130] >         log
I0223 22:21:58.815414 80620 command_runner.go:130] >         errors
I0223 22:21:58.815423 80620 command_runner.go:130] >         health {
I0223 22:21:58.815430 80620 command_runner.go:130] >             lameduck 5s
I0223 22:21:58.815435 80620 command_runner.go:130] >         }
I0223 22:21:58.815443 80620 command_runner.go:130] >         ready
I0223 22:21:58.815455 80620 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
I0223 22:21:58.815461 80620 command_runner.go:130] >             pods insecure
I0223 22:21:58.815470 80620 command_runner.go:130] >             fallthrough in-addr.arpa ip6.arpa
I0223 22:21:58.815479 80620 command_runner.go:130] >             ttl 30
I0223 22:21:58.815485 80620 command_runner.go:130] >         }
I0223 22:21:58.815495 80620 command_runner.go:130] >         prometheus :9153
I0223 22:21:58.815501 80620 command_runner.go:130] >         hosts {
I0223 22:21:58.815510 80620 command_runner.go:130] >             192.168.39.1 host.minikube.internal
I0223 22:21:58.815517 80620 command_runner.go:130] >             fallthrough
I0223 22:21:58.815526 80620 command_runner.go:130] >         }
I0223 22:21:58.815537 80620 command_runner.go:130] >         forward . /etc/resolv.conf {
I0223 22:21:58.815545 80620 command_runner.go:130] >             max_concurrent 1000
I0223 22:21:58.815553 80620 command_runner.go:130] >         }
I0223 22:21:58.815563 80620 command_runner.go:130] >         cache 30
I0223 22:21:58.815574 80620 command_runner.go:130] >         loop
I0223 22:21:58.815583 80620 command_runner.go:130] >         reload
I0223 22:21:58.815595 80620 command_runner.go:130] >         loadbalance
I0223 22:21:58.815605 80620 command_runner.go:130] >     }
I0223 22:21:58.815614 80620 command_runner.go:130] > kind: ConfigMap
I0223 22:21:58.815623 80620 command_runner.go:130] > metadata:
I0223 22:21:58.815631 80620 command_runner.go:130] >   creationTimestamp: "2023-02-23T22:17:37Z"
I0223 22:21:58.815641 80620 command_runner.go:130] >   name: coredns
I0223 22:21:58.815651 80620 command_runner.go:130] >   namespace: kube-system
I0223 22:21:58.815660 80620 command_runner.go:130] >   resourceVersion: "360"
I0223 22:21:58.815671 80620 command_runner.go:130] >   uid: 79632023-f720-4e05-a063-411c24789887
I0223 22:21:58.818640 80620 node_ready.go:35] waiting up to 6m0s for node "multinode-773885" to be "Ready" ...
I0223 22:21:58.818784 80620 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0223 22:21:58.866997 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:58.867022 80620 round_trippers.go:469] Request Headers:
I0223 22:21:58.867036 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:58.867046 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:58.869514 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:58.869542 80620 round_trippers.go:577] Response Headers:
I0223 22:21:58.869553 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:58.869562 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:58.869568 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:58.869573 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:58 GMT
I0223 22:21:58.869579 80620 round_trippers.go:580] Audit-Id: ef8ca951-03a3-4673-b3b0-d6e949e3aba1
I0223 22:21:58.869586 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:58.869696 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:59.370801 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:59.370828 80620 round_trippers.go:469] Request Headers:
I0223 22:21:59.370840 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:59.370850 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:59.373237 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:59.373263 80620 round_trippers.go:577] Response Headers:
I0223 22:21:59.373275 80620 round_trippers.go:580] Audit-Id: cc5c5f53-65a1-48f1-8d30-2983a96a1517
I0223 22:21:59.373284 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:59.373292 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:59.373301 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:59.373310 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:59.373320 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:59 GMT
I0223 22:21:59.373432 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:59.871104 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:59.871130 80620 round_trippers.go:469] Request Headers:
I0223 22:21:59.871142 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:59.871152 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:59.873824 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:59.873849 80620 round_trippers.go:577] Response Headers:
I0223 22:21:59.873860 80620 round_trippers.go:580] Audit-Id: a0c12052-13ba-4532-b2cb-ef0712468e2c
I0223 22:21:59.873868 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:59.873877 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:59.873890 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:59.873898 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:59.873910 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:59 GMT
I0223 22:21:59.874344 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:00.371108 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:00.371138 80620 round_trippers.go:469] Request Headers:
I0223 22:22:00.371150 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:00.371160 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:00.373796 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:00.373818 80620 round_trippers.go:577] Response Headers:
I0223 22:22:00.373826 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:00.373832 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:00.373837 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:00.373843 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:00 GMT
I0223 22:22:00.373849 80620 round_trippers.go:580] Audit-Id: 6d76f1af-c5ab-44d4-ac95-d4a732c54af0
I0223 22:22:00.373861 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:00.374155 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:00.870897 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:00.870933 80620 round_trippers.go:469] Request Headers:
I0223 22:22:00.870942 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:00.870951 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:00.873427 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:00.873451 80620 round_trippers.go:577] Response Headers:
I0223 22:22:00.873462 80620 round_trippers.go:580] Audit-Id: 494f6db1-2d29-4a14-be25-f5115f464c6c
I0223 22:22:00.873471 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:00.873485 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:00.873495 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:00.873504 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:00.873512 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:00 GMT
I0223 22:22:00.873654 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:00.874130 80620 node_ready.go:58] node "multinode-773885" has status "Ready":"False"
I0223 22:22:01.370246 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:01.370268 80620 round_trippers.go:469] Request Headers:
I0223 22:22:01.370279 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:01.370286 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:01.372742 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:01.372768 80620 round_trippers.go:577] Response Headers:
I0223 22:22:01.372779 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:01.372787 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:01.372796 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:01.372808 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:01 GMT
I0223 22:22:01.372816 80620 round_trippers.go:580] Audit-Id: d657d94b-1177-4e47-9c6a-10517add9c29
I0223 22:22:01.372827 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:01.372974 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:01.870635 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:01.870664 80620 round_trippers.go:469] Request Headers:
I0223 22:22:01.870672 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:01.870679 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:01.873350 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:01.873373 80620 round_trippers.go:577] Response Headers:
I0223 22:22:01.873386 80620 round_trippers.go:580] Audit-Id: 3aae1eee-a094-424f-bbd3-1cc775206a05
I0223 22:22:01.873395 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:01.873403 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:01.873410 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:01.873419 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:01.873428 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:01 GMT
I0223 22:22:01.873701 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:02.370356 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:02.370378 80620 round_trippers.go:469] Request Headers:
I0223 22:22:02.370386 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:02.370392 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:02.373961 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:02.373983 80620 round_trippers.go:577] Response Headers:
I0223 22:22:02.373992 80620 round_trippers.go:580] Audit-Id: 2d8ae255-30e7-495f-82a8-f977058510be
I0223 22:22:02.374000 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:02.374008 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:02.374018 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:02.374028 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:02.374041 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:02 GMT
I0223 22:22:02.374362 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:02.871107 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:02.871133 80620 round_trippers.go:469] Request Headers:
I0223 22:22:02.871148 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:02.871157 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:02.873653 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:02.873672 80620 round_trippers.go:577] Response Headers:
I0223 22:22:02.873680 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:02.873686 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:02.873691 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:02 GMT
I0223 22:22:02.873697 80620 round_trippers.go:580] Audit-Id: 88e3a2a0-3a44-456c-a122-9443f9691153
I0223 22:22:02.873706 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:02.873715 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:02.874022 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:02.874437 80620 node_ready.go:58] node "multinode-773885" has status "Ready":"False"
I0223 22:22:03.370842 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:03.370869 80620 round_trippers.go:469] Request Headers:
I0223 22:22:03.370886 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:03.370894 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:03.372889 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:03.372909 80620 round_trippers.go:577] Response Headers:
I0223 22:22:03.372916 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:03.372922 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:03 GMT
I0223 22:22:03.372928 80620 round_trippers.go:580] Audit-Id: 553e23aa-d7b4-4f46-b968-491b3c19b7a9
I0223 22:22:03.372934 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:03.372942 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:03.372954 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:03.373055 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:03.870742 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:03.870764 80620 round_trippers.go:469] Request Headers:
I0223 22:22:03.870773 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:03.870779 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:03.873449 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:03.873469 80620 round_trippers.go:577] Response Headers:
I0223 22:22:03.873476 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:03.873482 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:03.873487 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:03 GMT
I0223 22:22:03.873493 80620 round_trippers.go:580] Audit-Id: d10ccbbb-11df-43ab-9526-c648f4eb57ab
I0223 22:22:03.873499 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:03.873504 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:03.873699 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:04.370303 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:04.370324 80620 round_trippers.go:469] Request Headers:
I0223 22:22:04.370332 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:04.370339 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:04.372813 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:04.372839 80620 round_trippers.go:577] Response Headers:
I0223 22:22:04.372851 80620 round_trippers.go:580] Audit-Id: bdad9e22-9644-4e1c-8f6c-ae6fc5d4caf1
I0223 22:22:04.372861 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:04.372870 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:04.372879 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:04.372893 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:04.372902 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:04 GMT
I0223 22:22:04.373649 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:04.870293 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:04.870319 80620 round_trippers.go:469] Request Headers:
I0223 22:22:04.870327 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:04.870333 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:04.873111 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:04.873137 80620 round_trippers.go:577] Response Headers:
I0223 22:22:04.873148 80620 round_trippers.go:580] Audit-Id: 356034ea-3c99-4375-a746-070c2cc9db4c
I0223 22:22:04.873157 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:04.873164 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:04.873172 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:04.873182 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:04.873192 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:04 GMT
I0223 22:22:04.873417 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:04.873740 80620 node_ready.go:49] node "multinode-773885" has status "Ready":"True"
I0223 22:22:04.873759 80620 node_ready.go:38] duration metric: took 6.055088164s waiting for node "multinode-773885" to be "Ready" ...
I0223 22:22:04.873768 80620 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 22:22:04.873821 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:22:04.873828 80620 round_trippers.go:469] Request Headers:
I0223 22:22:04.873836 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:04.873842 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:04.877171 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:04.877190 80620 round_trippers.go:577] Response Headers:
I0223 22:22:04.877199 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:04.877209 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:04.877217 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:04.877225 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:04.877234 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:04 GMT
I0223 22:22:04.877242 80620 round_trippers.go:580] Audit-Id: ea2e3ce7-5ec8-4de8-affe-00217b9f0f75
I0223 22:22:04.878185 80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"788"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83657 chars]
I0223 22:22:04.880661 80620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
I0223 22:22:04.880721 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:04.880729 80620 round_trippers.go:469] Request Headers:
I0223 22:22:04.880736 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:04.880743 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:04.882620 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:04.882637 80620 round_trippers.go:577] Response Headers:
I0223 22:22:04.882643 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:04.882649 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:04 GMT
I0223 22:22:04.882654 80620 round_trippers.go:580] Audit-Id: b8c34b52-e089-4d20-abac-792cd26a154e
I0223 22:22:04.882660 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:04.882665 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:04.882671 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:04.882780 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:04.883130 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:04.883141 80620 round_trippers.go:469] Request Headers:
I0223 22:22:04.883148 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:04.883154 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:04.885545 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:04.885559 80620 round_trippers.go:577] Response Headers:
I0223 22:22:04.885566 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:04.885571 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:04.885577 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:04.885582 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:04 GMT
I0223 22:22:04.885590 80620 round_trippers.go:580] Audit-Id: a935859f-b8a0-4ddc-8ffe-b88f374b4617
I0223 22:22:04.885597 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:04.885668 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:05.386735 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:05.386762 80620 round_trippers.go:469] Request Headers:
I0223 22:22:05.386775 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:05.386785 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:05.389024 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:05.389044 80620 round_trippers.go:577] Response Headers:
I0223 22:22:05.389055 80620 round_trippers.go:580] Audit-Id: 5162732a-6a2d-4976-bd1a-d7a30dbd6874
I0223 22:22:05.389063 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:05.389070 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:05.389082 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:05.389095 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:05.389103 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:05 GMT
I0223 22:22:05.389223 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:05.389693 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:05.389706 80620 round_trippers.go:469] Request Headers:
I0223 22:22:05.389713 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:05.389722 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:05.391445 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:05.391462 80620 round_trippers.go:577] Response Headers:
I0223 22:22:05.391469 80620 round_trippers.go:580] Audit-Id: 152ffe10-665f-45a2-8a81-8746544ba57e
I0223 22:22:05.391475 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:05.391482 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:05.391491 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:05.391501 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:05.391511 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:05 GMT
I0223 22:22:05.391627 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:05.886225 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:05.886248 80620 round_trippers.go:469] Request Headers:
I0223 22:22:05.886257 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:05.886264 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:05.888353 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:05.888389 80620 round_trippers.go:577] Response Headers:
I0223 22:22:05.888399 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:05.888408 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:05 GMT
I0223 22:22:05.888417 80620 round_trippers.go:580] Audit-Id: cc5f0143-2508-446f-907a-56ab533f7430
I0223 22:22:05.888426 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:05.888438 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:05.888446 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:05.889024 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:05.889458 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:05.889469 80620 round_trippers.go:469] Request Headers:
I0223 22:22:05.889476 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:05.889484 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:05.891242 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:05.891257 80620 round_trippers.go:577] Response Headers:
I0223 22:22:05.891263 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:05.891269 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:05.891275 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:05.891283 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:05.891293 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:05 GMT
I0223 22:22:05.891319 80620 round_trippers.go:580] Audit-Id: ee3b00fc-914b-4eba-8a45-e4597d8f6d25
I0223 22:22:05.891627 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:06.386281 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:06.386303 80620 round_trippers.go:469] Request Headers:
I0223 22:22:06.386311 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:06.386326 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:06.388974 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:06.388992 80620 round_trippers.go:577] Response Headers:
I0223 22:22:06.388999 80620 round_trippers.go:580] Audit-Id: 220c9abc-71ea-4bf1-984a-8b6e023377f1
I0223 22:22:06.389014 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:06.389026 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:06.389038 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:06.389046 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:06.389052 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:06 GMT
I0223 22:22:06.389842 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:06.390308 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:06.390321 80620 round_trippers.go:469] Request Headers:
I0223 22:22:06.390328 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:06.390337 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:06.391935 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:06.391953 80620 round_trippers.go:577] Response Headers:
I0223 22:22:06.391962 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:06.391970 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:06.391980 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:06 GMT
I0223 22:22:06.391989 80620 round_trippers.go:580] Audit-Id: 7685b789-c707-4d17-88af-7145585bce78
I0223 22:22:06.391998 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:06.392010 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:06.392362 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:06.886127 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:06.886150 80620 round_trippers.go:469] Request Headers:
I0223 22:22:06.886159 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:06.886165 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:06.889975 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:06.890001 80620 round_trippers.go:577] Response Headers:
I0223 22:22:06.890013 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:06.890023 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:06.890035 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:06 GMT
I0223 22:22:06.890048 80620 round_trippers.go:580] Audit-Id: 87848966-24d5-45b3-a7aa-56f65410f508
I0223 22:22:06.890057 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:06.890070 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:06.890267 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:06.890721 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:06.890734 80620 round_trippers.go:469] Request Headers:
I0223 22:22:06.890741 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:06.890747 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:06.895655 80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0223 22:22:06.895674 80620 round_trippers.go:577] Response Headers:
I0223 22:22:06.895684 80620 round_trippers.go:580] Audit-Id: f054bb7d-1199-4b8d-b3f0-4c0274f1d63d
I0223 22:22:06.895693 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:06.895702 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:06.895713 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:06.895724 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:06.895736 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:06 GMT
I0223 22:22:06.896139 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:06.896420 80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
I0223 22:22:07.386841 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:07.386862 80620 round_trippers.go:469] Request Headers:
I0223 22:22:07.386871 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:07.386878 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:07.389998 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:07.390025 80620 round_trippers.go:577] Response Headers:
I0223 22:22:07.390036 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:07.390046 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:07.390054 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:07.390062 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:07 GMT
I0223 22:22:07.390070 80620 round_trippers.go:580] Audit-Id: d6b7ea92-112f-499d-a61b-86d8245e8558
I0223 22:22:07.390078 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:07.390244 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:07.390679 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:07.390690 80620 round_trippers.go:469] Request Headers:
I0223 22:22:07.390698 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:07.390704 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:07.392927 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:07.392948 80620 round_trippers.go:577] Response Headers:
I0223 22:22:07.392958 80620 round_trippers.go:580] Audit-Id: e7498617-1172-42fd-b07a-d2d628e52a21
I0223 22:22:07.392969 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:07.392988 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:07.393002 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:07.393011 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:07.393022 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:07 GMT
I0223 22:22:07.393607 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:07.886231 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:07.886254 80620 round_trippers.go:469] Request Headers:
I0223 22:22:07.886277 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:07.886284 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:07.889328 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:07.889351 80620 round_trippers.go:577] Response Headers:
I0223 22:22:07.889359 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:07.889366 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:07 GMT
I0223 22:22:07.889371 80620 round_trippers.go:580] Audit-Id: 996a8d26-ab61-4eb1-a206-c0fb32514e06
I0223 22:22:07.889377 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:07.889382 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:07.889388 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:07.889970 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:07.890413 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:07.890425 80620 round_trippers.go:469] Request Headers:
I0223 22:22:07.890432 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:07.890439 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:07.897920 80620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0223 22:22:07.897934 80620 round_trippers.go:577] Response Headers:
I0223 22:22:07.897941 80620 round_trippers.go:580] Audit-Id: 4221b7db-ff10-4443-aed5-78c6f7b9296c
I0223 22:22:07.897947 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:07.897953 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:07.897958 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:07.897966 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:07.897972 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:07 GMT
I0223 22:22:07.898379 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:08.386191 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:08.386213 80620 round_trippers.go:469] Request Headers:
I0223 22:22:08.386224 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:08.386234 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:08.388618 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:08.388637 80620 round_trippers.go:577] Response Headers:
I0223 22:22:08.388644 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:08.388652 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:08.388660 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:08.388668 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:08 GMT
I0223 22:22:08.388689 80620 round_trippers.go:580] Audit-Id: 9fd3f354-aaea-4470-b0a9-a62bb9cf4b81
I0223 22:22:08.388695 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:08.389016 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:08.389462 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:08.389474 80620 round_trippers.go:469] Request Headers:
I0223 22:22:08.389484 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:08.389493 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:08.391347 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:08.391366 80620 round_trippers.go:577] Response Headers:
I0223 22:22:08.391376 80620 round_trippers.go:580] Audit-Id: d2b922bc-cc07-4d6a-a919-5b81247f7675
I0223 22:22:08.391385 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:08.391396 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:08.391405 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:08.391414 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:08.391419 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:08 GMT
I0223 22:22:08.391692 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:08.886358 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:08.886387 80620 round_trippers.go:469] Request Headers:
I0223 22:22:08.886397 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:08.886403 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:08.889174 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:08.889200 80620 round_trippers.go:577] Response Headers:
I0223 22:22:08.889209 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:08 GMT
I0223 22:22:08.889215 80620 round_trippers.go:580] Audit-Id: 7d35bf13-e46b-4b70-b379-eef2287d1352
I0223 22:22:08.889220 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:08.889226 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:08.889231 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:08.889236 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:08.889437 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:08.889910 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:08.889923 80620 round_trippers.go:469] Request Headers:
I0223 22:22:08.889931 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:08.889937 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:08.892893 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:08.892908 80620 round_trippers.go:577] Response Headers:
I0223 22:22:08.892914 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:08 GMT
I0223 22:22:08.892919 80620 round_trippers.go:580] Audit-Id: c156c99d-e130-4f55-b4e3-14616a7ba70f
I0223 22:22:08.892927 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:08.892936 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:08.892945 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:08.892956 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:08.893597 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:09.386240 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:09.386263 80620 round_trippers.go:469] Request Headers:
I0223 22:22:09.386272 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:09.386278 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:09.388959 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:09.388983 80620 round_trippers.go:577] Response Headers:
I0223 22:22:09.388991 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:09.388997 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:09.389002 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:09 GMT
I0223 22:22:09.389007 80620 round_trippers.go:580] Audit-Id: b1b9610c-e081-4bbb-837e-8be581f68475
I0223 22:22:09.389013 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:09.389018 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:09.389296 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:09.389849 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:09.389877 80620 round_trippers.go:469] Request Headers:
I0223 22:22:09.389888 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:09.389895 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:09.391871 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:09.391888 80620 round_trippers.go:577] Response Headers:
I0223 22:22:09.391895 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:09.391900 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:09.391906 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:09.391911 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:09.391916 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:09 GMT
I0223 22:22:09.391930 80620 round_trippers.go:580] Audit-Id: 002294de-1a26-4570-886e-0a7800195800
I0223 22:22:09.392074 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:09.392445 80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
I0223 22:22:09.886775 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:09.886796 80620 round_trippers.go:469] Request Headers:
I0223 22:22:09.886805 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:09.886812 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:09.889680 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:09.889703 80620 round_trippers.go:577] Response Headers:
I0223 22:22:09.889710 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:09.889716 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:09 GMT
I0223 22:22:09.889722 80620 round_trippers.go:580] Audit-Id: 3a94f330-f28f-46c4-a648-51998b06aed1
I0223 22:22:09.889730 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:09.889740 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:09.889749 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:09.889960 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:09.890412 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:09.890426 80620 round_trippers.go:469] Request Headers:
I0223 22:22:09.890433 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:09.890439 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:09.893112 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:09.893124 80620 round_trippers.go:577] Response Headers:
I0223 22:22:09.893131 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:09 GMT
I0223 22:22:09.893136 80620 round_trippers.go:580] Audit-Id: f1b19073-36ac-4a4c-b6c5-aa4b69ec1776
I0223 22:22:09.893141 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:09.893148 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:09.893156 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:09.893165 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:09.893436 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:10.386076 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:10.386100 80620 round_trippers.go:469] Request Headers:
I0223 22:22:10.386109 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:10.386115 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:10.388462 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:10.388484 80620 round_trippers.go:577] Response Headers:
I0223 22:22:10.388491 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:10.388497 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:10.388502 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:10 GMT
I0223 22:22:10.388508 80620 round_trippers.go:580] Audit-Id: b0c0f970-513c-4958-8f0f-9012dbfa36d5
I0223 22:22:10.388513 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:10.388518 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:10.388755 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:10.389295 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:10.389312 80620 round_trippers.go:469] Request Headers:
I0223 22:22:10.389323 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:10.389333 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:10.391529 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:10.391550 80620 round_trippers.go:577] Response Headers:
I0223 22:22:10.391560 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:10.391568 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:10.391574 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:10.391582 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:10.391587 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:10 GMT
I0223 22:22:10.391593 80620 round_trippers.go:580] Audit-Id: 10261026-5803-485c-834a-bf21f0cb79e3
I0223 22:22:10.391676 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:10.886276 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:10.886298 80620 round_trippers.go:469] Request Headers:
I0223 22:22:10.886310 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:10.886319 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:10.890190 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:10.890215 80620 round_trippers.go:577] Response Headers:
I0223 22:22:10.890222 80620 round_trippers.go:580] Audit-Id: b6386ff9-de93-4709-b3ef-d903d0d5a9cc
I0223 22:22:10.890228 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:10.890234 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:10.890239 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:10.890245 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:10.890251 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:10 GMT
I0223 22:22:10.890402 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"836","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
I0223 22:22:10.890869 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:10.890883 80620 round_trippers.go:469] Request Headers:
I0223 22:22:10.890893 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:10.890902 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:10.895016 80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0223 22:22:10.895035 80620 round_trippers.go:577] Response Headers:
I0223 22:22:10.895046 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:10.895055 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:10.895064 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:10.895073 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:10 GMT
I0223 22:22:10.895080 80620 round_trippers.go:580] Audit-Id: 2e664d84-586c-4ab6-94bc-ba77835a654d
I0223 22:22:10.895085 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:10.895436 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:11.386154 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:11.386182 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.386193 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.386202 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.388774 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.388795 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.388805 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.388814 80620 round_trippers.go:580] Audit-Id: 0b53d934-8f77-4a2f-bbe6-92be4d3d5c17
I0223 22:22:11.388822 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.388831 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.388848 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.388858 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.389048 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"836","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
I0223 22:22:11.389509 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:11.389522 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.389532 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.389541 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.391436 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:11.391458 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.391475 80620 round_trippers.go:580] Audit-Id: f0d5469c-1828-43e0-99ac-880d59c5ca18
I0223 22:22:11.391486 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.391496 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.391502 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.391508 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.391514 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.392144 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:11.392489 80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
I0223 22:22:11.886705 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:11.886728 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.886740 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.886747 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.897949 80620 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0223 22:22:11.897972 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.897979 80620 round_trippers.go:580] Audit-Id: ee3fad82-cb14-466d-be80-d787cdfe18c6
I0223 22:22:11.897988 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.897996 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.898005 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.898014 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.898023 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.898203 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6491 chars]
I0223 22:22:11.898695 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:11.898709 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.898716 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.898722 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.901522 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.901537 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.901546 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.901555 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.901565 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.901574 80620 round_trippers.go:580] Audit-Id: 67ab3f98-4824-4d37-9baa-d6fde6241cd3
I0223 22:22:11.901583 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.901592 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.901884 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:11.902261 80620 pod_ready.go:92] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:11.902281 80620 pod_ready.go:81] duration metric: took 7.021599209s waiting for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.902292 80620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.902345 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-773885
I0223 22:22:11.902362 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.902374 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.902387 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.905539 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:11.905555 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.905564 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.905573 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.905584 80620 round_trippers.go:580] Audit-Id: b11ef536-b4c5-482e-aa7c-76d59636d5d2
I0223 22:22:11.905592 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.905600 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.905608 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.906366 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"802","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6065 chars]
I0223 22:22:11.906856 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:11.906876 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.906892 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.906903 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.908814 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:11.908827 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.908833 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.908838 80620 round_trippers.go:580] Audit-Id: afa24933-99a3-4732-ab8c-89f796285545
I0223 22:22:11.908844 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.908849 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.908860 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.908868 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.909140 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:11.909495 80620 pod_ready.go:92] pod "etcd-multinode-773885" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:11.909509 80620 pod_ready.go:81] duration metric: took 7.209083ms waiting for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.909528 80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.909582 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-773885
I0223 22:22:11.909592 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.909603 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.909616 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.911700 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.911720 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.911729 80620 round_trippers.go:580] Audit-Id: 779ea438-bd06-40b6-ba45-805cc766e96d
I0223 22:22:11.911737 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.911745 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.911754 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.911762 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.911772 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.911987 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-773885","namespace":"kube-system","uid":"f9cbb81f-f7c6-47e7-9e3c-393680d5ee52","resourceVersion":"793","creationTimestamp":"2023-02-23T22:17:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.240:8443","kubernetes.io/config.hash":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.mirror":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.seen":"2023-02-23T22:17:25.440360314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7599 chars]
I0223 22:22:11.912445 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:11.912459 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.912475 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.912485 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.914590 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.914610 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.914619 80620 round_trippers.go:580] Audit-Id: 05b9d526-86d7-43a1-a29b-8b19eb1394d1
I0223 22:22:11.914628 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.914637 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.914659 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.914670 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.914685 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.914841 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:11.915184 80620 pod_ready.go:92] pod "kube-apiserver-multinode-773885" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:11.915198 80620 pod_ready.go:81] duration metric: took 5.656927ms waiting for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.915207 80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.915261 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-773885
I0223 22:22:11.915271 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.915282 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.915294 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.917370 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.917390 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.917400 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.917407 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.917416 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.917424 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.917434 80620 round_trippers.go:580] Audit-Id: 1c6ec0cd-a712-46c0-9127-fc5aaaf54dca
I0223 22:22:11.917444 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.917666 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-773885","namespace":"kube-system","uid":"df36fee9-6048-45f6-b17a-679c2c9e3daf","resourceVersion":"825","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.mirror":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.seen":"2023-02-23T22:17:38.195450048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7162 chars]
I0223 22:22:11.918056 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:11.918067 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.918078 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.918090 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.920329 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.920349 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.920359 80620 round_trippers.go:580] Audit-Id: 4abce7c0-9628-4d94-8005-2a2dfc23a6e7
I0223 22:22:11.920367 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.920377 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.920386 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.920394 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.920410 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.921292 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:11.921655 80620 pod_ready.go:92] pod "kube-controller-manager-multinode-773885" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:11.921672 80620 pod_ready.go:81] duration metric: took 6.456858ms waiting for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.921682 80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.921744 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
I0223 22:22:11.921759 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.921770 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.921788 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.923979 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.923999 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.924008 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.924016 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.924024 80620 round_trippers.go:580] Audit-Id: 0efbb785-cf58-48c7-81ba-79e7df1fffe6
I0223 22:22:11.924037 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.924045 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.924054 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.924324 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5d5vn","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3dfcd7d-3514-4286-93e9-f51f9f91c2d7","resourceVersion":"491","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
I0223 22:22:11.924642 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
I0223 22:22:11.924651 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.924659 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.924668 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.927145 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.927164 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.927174 80620 round_trippers.go:580] Audit-Id: d525fadc-555c-4d29-8ba1-8f98e144287a
I0223 22:22:11.927190 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.927201 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.927209 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.927221 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.927230 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.927662 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m02","uid":"6657df38-0b72-4f36-a536-d4626cf22c9b","resourceVersion":"560","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4513 chars]
I0223 22:22:11.927907 80620 pod_ready.go:92] pod "kube-proxy-5d5vn" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:11.927917 80620 pod_ready.go:81] duration metric: took 6.229355ms waiting for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.927924 80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
I0223 22:22:12.087372 80620 request.go:622] Waited for 159.388811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
I0223 22:22:12.087472 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
I0223 22:22:12.087484 80620 round_trippers.go:469] Request Headers:
I0223 22:22:12.087494 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:12.087506 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:12.090953 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:12.090975 80620 round_trippers.go:577] Response Headers:
I0223 22:22:12.090982 80620 round_trippers.go:580] Audit-Id: d476c971-82f9-4e13-bf24-ac1d0a7e0132
I0223 22:22:12.090988 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:12.091000 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:12.091015 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:12.091023 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:12.091034 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:12 GMT
I0223 22:22:12.091257 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mdjks","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1cb3f4c-effa-4f0e-bbaa-ff792325a571","resourceVersion":"751","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
I0223 22:22:12.287106 80620 request.go:622] Waited for 195.345935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:12.287171 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:12.287176 80620 round_trippers.go:469] Request Headers:
I0223 22:22:12.287184 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:12.287190 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:12.290450 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:12.290482 80620 round_trippers.go:577] Response Headers:
I0223 22:22:12.290493 80620 round_trippers.go:580] Audit-Id: 293be0f3-4481-47c8-8397-f5bcd5d19b91
I0223 22:22:12.290503 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:12.290511 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:12.290527 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:12.290541 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:12.290550 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:12 GMT
I0223 22:22:12.290685 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:12.290991 80620 pod_ready.go:92] pod "kube-proxy-mdjks" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:12.291002 80620 pod_ready.go:81] duration metric: took 363.073923ms waiting for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
I0223 22:22:12.291011 80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
I0223 22:22:12.487380 80620 request.go:622] Waited for 196.297867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
I0223 22:22:12.487451 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
I0223 22:22:12.487455 80620 round_trippers.go:469] Request Headers:
I0223 22:22:12.487463 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:12.487470 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:12.490351 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:12.490369 80620 round_trippers.go:577] Response Headers:
I0223 22:22:12.490376 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:12.490382 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:12.490390 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:12.490396 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:12.490402 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:12 GMT
I0223 22:22:12.490408 80620 round_trippers.go:580] Audit-Id: 3101849d-f3a0-4ede-99b6-2a380cea5ba6
I0223 22:22:12.490636 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psgdt","generateName":"kube-proxy-","namespace":"kube-system","uid":"57d8204d-38f2-413f-8855-237db379cd27","resourceVersion":"721","creationTimestamp":"2023-02-23T22:19:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:19:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
I0223 22:22:12.687374 80620 request.go:622] Waited for 196.32053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
I0223 22:22:12.687452 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
I0223 22:22:12.687458 80620 round_trippers.go:469] Request Headers:
I0223 22:22:12.687466 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:12.687472 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:12.690923 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:12.690945 80620 round_trippers.go:577] Response Headers:
I0223 22:22:12.690952 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:12.690958 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:12.690963 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:12.690969 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:12 GMT
I0223 22:22:12.690975 80620 round_trippers.go:580] Audit-Id: f8604e33-edeb-42ae-8e19-5e27a6bd8d7d
I0223 22:22:12.690980 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:12.693472 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m03","uid":"22181ea8-5030-450a-9927-f28a8241ef6a","resourceVersion":"732","creationTimestamp":"2023-02-23T22:20:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4329 chars]
I0223 22:22:12.693842 80620 pod_ready.go:92] pod "kube-proxy-psgdt" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:12.693857 80620 pod_ready.go:81] duration metric: took 402.838971ms waiting for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
I0223 22:22:12.693868 80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:12.886856 80620 request.go:622] Waited for 192.90851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
I0223 22:22:12.886917 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
I0223 22:22:12.886932 80620 round_trippers.go:469] Request Headers:
I0223 22:22:12.886943 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:12.886952 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:12.893080 80620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0223 22:22:12.893102 80620 round_trippers.go:577] Response Headers:
I0223 22:22:12.893109 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:12 GMT
I0223 22:22:12.893115 80620 round_trippers.go:580] Audit-Id: 854e2fd9-4c25-4b2f-bc59-61d21fabfb74
I0223 22:22:12.893120 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:12.893125 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:12.893131 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:12.893136 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:12.893332 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-773885","namespace":"kube-system","uid":"ecc1fa39-40dc-4d57-be46-8e9a01431180","resourceVersion":"786","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.mirror":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.seen":"2023-02-23T22:17:38.195431871Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4892 chars]
I0223 22:22:13.087065 80620 request.go:622] Waited for 193.332526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:13.087127 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:13.087133 80620 round_trippers.go:469] Request Headers:
I0223 22:22:13.087143 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:13.087153 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:13.091144 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:13.091162 80620 round_trippers.go:577] Response Headers:
I0223 22:22:13.091169 80620 round_trippers.go:580] Audit-Id: bf568af1-d7fc-4da0-9559-42a27fc0cef3
I0223 22:22:13.091175 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:13.091181 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:13.091186 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:13.091198 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:13.091210 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:13 GMT
I0223 22:22:13.091630 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:13.091948 80620 pod_ready.go:92] pod "kube-scheduler-multinode-773885" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:13.091980 80620 pod_ready.go:81] duration metric: took 398.085634ms waiting for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:13.091998 80620 pod_ready.go:38] duration metric: took 8.218220101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 22:22:13.092020 80620 api_server.go:51] waiting for apiserver process to appear ...
I0223 22:22:13.092066 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:22:13.104775 80620 command_runner.go:130] > 1675
I0223 22:22:13.104818 80620 api_server.go:71] duration metric: took 14.412044719s to wait for apiserver process to appear ...
I0223 22:22:13.104835 80620 api_server.go:87] waiting for apiserver healthz status ...
I0223 22:22:13.104847 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:22:13.110111 80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 200:
ok
I0223 22:22:13.110176 80620 round_trippers.go:463] GET https://192.168.39.240:8443/version
I0223 22:22:13.110187 80620 round_trippers.go:469] Request Headers:
I0223 22:22:13.110206 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:13.110217 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:13.110872 80620 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0223 22:22:13.110888 80620 round_trippers.go:577] Response Headers:
I0223 22:22:13.110895 80620 round_trippers.go:580] Audit-Id: 4f7ff6ce-bed0-47c2-918d-6dd15db9ce31
I0223 22:22:13.110901 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:13.110906 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:13.110911 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:13.110918 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:13.110923 80620 round_trippers.go:580] Content-Length: 263
I0223 22:22:13.110930 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:13 GMT
I0223 22:22:13.110950 80620 request.go:1171] Response Body: {
"major": "1",
"minor": "26",
"gitVersion": "v1.26.1",
"gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
"gitTreeState": "clean",
"buildDate": "2023-01-18T15:51:25Z",
"goVersion": "go1.19.5",
"compiler": "gc",
"platform": "linux/amd64"
}
I0223 22:22:13.111007 80620 api_server.go:140] control plane version: v1.26.1
I0223 22:22:13.111018 80620 api_server.go:130] duration metric: took 6.177354ms to wait for apiserver health ...
I0223 22:22:13.111024 80620 system_pods.go:43] waiting for kube-system pods to appear ...
I0223 22:22:13.287730 80620 request.go:622] Waited for 176.607463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:22:13.287780 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:22:13.287784 80620 round_trippers.go:469] Request Headers:
I0223 22:22:13.287794 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:13.287804 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:13.292061 80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0223 22:22:13.292080 80620 round_trippers.go:577] Response Headers:
I0223 22:22:13.292087 80620 round_trippers.go:580] Audit-Id: 8f903081-07eb-4386-b54e-2c988265836f
I0223 22:22:13.292096 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:13.292104 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:13.292110 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:13.292116 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:13.292121 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:13 GMT
I0223 22:22:13.294183 80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"859"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82875 chars]
I0223 22:22:13.296686 80620 system_pods.go:59] 12 kube-system pods found
I0223 22:22:13.296706 80620 system_pods.go:61] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
I0223 22:22:13.296711 80620 system_pods.go:61] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running
I0223 22:22:13.296715 80620 system_pods.go:61] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
I0223 22:22:13.296719 80620 system_pods.go:61] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
I0223 22:22:13.296723 80620 system_pods.go:61] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
I0223 22:22:13.296727 80620 system_pods.go:61] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
I0223 22:22:13.296731 80620 system_pods.go:61] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running
I0223 22:22:13.296737 80620 system_pods.go:61] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
I0223 22:22:13.296741 80620 system_pods.go:61] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
I0223 22:22:13.296745 80620 system_pods.go:61] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
I0223 22:22:13.296750 80620 system_pods.go:61] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running
I0223 22:22:13.296754 80620 system_pods.go:61] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
I0223 22:22:13.296759 80620 system_pods.go:74] duration metric: took 185.729884ms to wait for pod list to return data ...
I0223 22:22:13.296768 80620 default_sa.go:34] waiting for default service account to be created ...
I0223 22:22:13.487059 80620 request.go:622] Waited for 190.213748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
I0223 22:22:13.487142 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
I0223 22:22:13.487151 80620 round_trippers.go:469] Request Headers:
I0223 22:22:13.487163 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:13.487179 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:13.490660 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:13.490686 80620 round_trippers.go:577] Response Headers:
I0223 22:22:13.490698 80620 round_trippers.go:580] Content-Length: 261
I0223 22:22:13.490707 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:13 GMT
I0223 22:22:13.490715 80620 round_trippers.go:580] Audit-Id: b33f914f-7659-4fc8-8f76-26f7e677ba77
I0223 22:22:13.490724 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:13.490733 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:13.490746 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:13.490755 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:13.490784 80620 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"860"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"62ac0740-2090-4217-a812-0d7ea88a967e","resourceVersion":"301","creationTimestamp":"2023-02-23T22:17:49Z"}}]}
I0223 22:22:13.491028 80620 default_sa.go:45] found service account: "default"
I0223 22:22:13.491048 80620 default_sa.go:55] duration metric: took 194.273065ms for default service account to be created ...
I0223 22:22:13.491059 80620 system_pods.go:116] waiting for k8s-apps to be running ...
I0223 22:22:13.687553 80620 request.go:622] Waited for 196.395892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:22:13.687624 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:22:13.687630 80620 round_trippers.go:469] Request Headers:
I0223 22:22:13.687642 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:13.687659 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:13.691923 80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0223 22:22:13.691949 80620 round_trippers.go:577] Response Headers:
I0223 22:22:13.691960 80620 round_trippers.go:580] Audit-Id: b99f1d26-3de6-4548-9948-e1ef63d9e02a
I0223 22:22:13.691969 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:13.691980 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:13.691988 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:13.691997 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:13.692005 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:13 GMT
I0223 22:22:13.693522 80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82875 chars]
I0223 22:22:13.695955 80620 system_pods.go:86] 12 kube-system pods found
I0223 22:22:13.695978 80620 system_pods.go:89] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
I0223 22:22:13.695985 80620 system_pods.go:89] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running
I0223 22:22:13.695993 80620 system_pods.go:89] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
I0223 22:22:13.695999 80620 system_pods.go:89] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
I0223 22:22:13.696005 80620 system_pods.go:89] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
I0223 22:22:13.696012 80620 system_pods.go:89] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
I0223 22:22:13.696020 80620 system_pods.go:89] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running
I0223 22:22:13.696028 80620 system_pods.go:89] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
I0223 22:22:13.696040 80620 system_pods.go:89] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
I0223 22:22:13.696048 80620 system_pods.go:89] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
I0223 22:22:13.696055 80620 system_pods.go:89] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running
I0223 22:22:13.696061 80620 system_pods.go:89] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
I0223 22:22:13.696071 80620 system_pods.go:126] duration metric: took 205.005964ms to wait for k8s-apps to be running ...
I0223 22:22:13.696085 80620 system_svc.go:44] waiting for kubelet service to be running ....
I0223 22:22:13.696135 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 22:22:13.709623 80620 system_svc.go:56] duration metric: took 13.531533ms WaitForService to wait for kubelet.
I0223 22:22:13.709679 80620 kubeadm.go:578] duration metric: took 15.016875282s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0223 22:22:13.709713 80620 node_conditions.go:102] verifying NodePressure condition ...
I0223 22:22:13.887138 80620 request.go:622] Waited for 177.351024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
I0223 22:22:13.887250 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
I0223 22:22:13.887261 80620 round_trippers.go:469] Request Headers:
I0223 22:22:13.887269 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:13.887276 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:13.889579 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:13.889601 80620 round_trippers.go:577] Response Headers:
I0223 22:22:13.889608 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:13.889614 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:13.889620 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:13 GMT
I0223 22:22:13.889625 80620 round_trippers.go:580] Audit-Id: 4402b5a7-68c0-489c-bf87-bedbd28a14fe
I0223 22:22:13.889631 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:13.889636 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:13.889855 80620 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"862"},"items":[{"metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16192 chars]
I0223 22:22:13.890436 80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 22:22:13.890455 80620 node_conditions.go:123] node cpu capacity is 2
I0223 22:22:13.890468 80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 22:22:13.890474 80620 node_conditions.go:123] node cpu capacity is 2
I0223 22:22:13.890481 80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 22:22:13.890489 80620 node_conditions.go:123] node cpu capacity is 2
I0223 22:22:13.890496 80620 node_conditions.go:105] duration metric: took 180.777399ms to run NodePressure ...
I0223 22:22:13.890512 80620 start.go:228] waiting for startup goroutines ...
I0223 22:22:13.890522 80620 start.go:233] waiting for cluster config update ...
I0223 22:22:13.890533 80620 start.go:242] writing updated cluster config ...
I0223 22:22:13.890966 80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 22:22:13.891077 80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
I0223 22:22:13.893728 80620 out.go:177] * Starting worker node multinode-773885-m02 in cluster multinode-773885
I0223 22:22:13.895212 80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0223 22:22:13.895236 80620 cache.go:57] Caching tarball of preloaded images
I0223 22:22:13.895333 80620 preload.go:174] Found /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0223 22:22:13.895345 80620 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0223 22:22:13.895468 80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
I0223 22:22:13.895625 80620 cache.go:193] Successfully downloaded all kic artifacts
I0223 22:22:13.895655 80620 start.go:364] acquiring machines lock for multinode-773885-m02: {Name:mk190e887b13a8e75fbaa786555e3f621b6db823 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0223 22:22:13.895705 80620 start.go:368] acquired machines lock for "multinode-773885-m02" in 30.081µs
I0223 22:22:13.895724 80620 start.go:96] Skipping create...Using existing machine configuration
I0223 22:22:13.895732 80620 fix.go:55] fixHost starting: m02
I0223 22:22:13.896010 80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 22:22:13.896038 80620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 22:22:13.910341 80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40933
I0223 22:22:13.910796 80620 main.go:141] libmachine: () Calling .GetVersion
I0223 22:22:13.911318 80620 main.go:141] libmachine: Using API Version 1
I0223 22:22:13.911343 80620 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 22:22:13.911672 80620 main.go:141] libmachine: () Calling .GetMachineName
I0223 22:22:13.911860 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:13.911979 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetState
I0223 22:22:13.913566 80620 fix.go:103] recreateIfNeeded on multinode-773885-m02: state=Stopped err=<nil>
I0223 22:22:13.913585 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
W0223 22:22:13.913746 80620 fix.go:129] unexpected machine state, will restart: <nil>
I0223 22:22:13.915708 80620 out.go:177] * Restarting existing kvm2 VM for "multinode-773885-m02" ...
I0223 22:22:13.917009 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .Start
I0223 22:22:13.917151 80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring networks are active...
I0223 22:22:13.917783 80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring network default is active
I0223 22:22:13.918134 80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring network mk-multinode-773885 is active
I0223 22:22:13.918457 80620 main.go:141] libmachine: (multinode-773885-m02) Getting domain xml...
I0223 22:22:13.919047 80620 main.go:141] libmachine: (multinode-773885-m02) Creating domain...
I0223 22:22:15.148655 80620 main.go:141] libmachine: (multinode-773885-m02) Waiting to get IP...
I0223 22:22:15.149521 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:15.149889 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:15.149974 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.149904 80738 retry.go:31] will retry after 193.258579ms: waiting for machine to come up
I0223 22:22:15.344335 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:15.344701 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:15.344731 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.344650 80738 retry.go:31] will retry after 325.897575ms: waiting for machine to come up
I0223 22:22:15.672194 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:15.672594 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:15.672628 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.672550 80738 retry.go:31] will retry after 464.389068ms: waiting for machine to come up
I0223 22:22:16.138184 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:16.138690 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:16.138753 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:16.138682 80738 retry.go:31] will retry after 418.748231ms: waiting for machine to come up
I0223 22:22:16.559096 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:16.559605 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:16.559635 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:16.559550 80738 retry.go:31] will retry after 471.42311ms: waiting for machine to come up
I0223 22:22:17.033003 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:17.033388 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:17.033425 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:17.033349 80738 retry.go:31] will retry after 716.223287ms: waiting for machine to come up
I0223 22:22:17.751192 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:17.751627 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:17.751662 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:17.751564 80738 retry.go:31] will retry after 829.526019ms: waiting for machine to come up
I0223 22:22:18.582469 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:18.582861 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:18.582893 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:18.582810 80738 retry.go:31] will retry after 1.314736274s: waiting for machine to come up
I0223 22:22:19.898527 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:19.898968 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:19.898996 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:19.898923 80738 retry.go:31] will retry after 1.848898641s: waiting for machine to come up
I0223 22:22:21.749410 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:21.749799 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:21.749831 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:21.749746 80738 retry.go:31] will retry after 1.422968619s: waiting for machine to come up
I0223 22:22:23.174280 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:23.174762 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:23.174796 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:23.174689 80738 retry.go:31] will retry after 2.26457317s: waiting for machine to come up
I0223 22:22:25.440649 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:25.441040 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:25.441077 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:25.441025 80738 retry.go:31] will retry after 2.412299301s: waiting for machine to come up
I0223 22:22:27.856562 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:27.857000 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:27.857029 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:27.856943 80738 retry.go:31] will retry after 3.510265055s: waiting for machine to come up
I0223 22:22:31.369182 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.369590 80620 main.go:141] libmachine: (multinode-773885-m02) Found IP for machine: 192.168.39.102
I0223 22:22:31.369622 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has current primary IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.369632 80620 main.go:141] libmachine: (multinode-773885-m02) Reserving static IP address...
I0223 22:22:31.370012 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "multinode-773885-m02", mac: "52:54:00:b1:bb:00", ip: "192.168.39.102"} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.370035 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | skip adding static IP to network mk-multinode-773885 - found existing host DHCP lease matching {name: "multinode-773885-m02", mac: "52:54:00:b1:bb:00", ip: "192.168.39.102"}
I0223 22:22:31.370045 80620 main.go:141] libmachine: (multinode-773885-m02) Reserved static IP address: 192.168.39.102
I0223 22:22:31.370056 80620 main.go:141] libmachine: (multinode-773885-m02) Waiting for SSH to be available...
I0223 22:22:31.370068 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Getting to WaitForSSH function...
I0223 22:22:31.372076 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.372417 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.372440 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.372551 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Using SSH client type: external
I0223 22:22:31.372572 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa (-rw-------)
I0223 22:22:31.372608 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0223 22:22:31.372622 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | About to run SSH command:
I0223 22:22:31.372638 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | exit 0
I0223 22:22:31.506747 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | SSH cmd err, output: <nil>:
I0223 22:22:31.507041 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetConfigRaw
I0223 22:22:31.507719 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
I0223 22:22:31.510014 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.510356 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.510390 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.510652 80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
I0223 22:22:31.510883 80620 machine.go:88] provisioning docker machine ...
I0223 22:22:31.510909 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:31.511142 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
I0223 22:22:31.511321 80620 buildroot.go:166] provisioning hostname "multinode-773885-m02"
I0223 22:22:31.511339 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
I0223 22:22:31.511489 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:31.513584 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.513939 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.513969 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.514122 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:31.514268 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:31.514404 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:31.514532 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:31.514655 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:22:31.515234 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.102 22 <nil> <nil>}
I0223 22:22:31.515255 80620 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-773885-m02 && echo "multinode-773885-m02" | sudo tee /etc/hostname
I0223 22:22:31.655693 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773885-m02
I0223 22:22:31.655725 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:31.658407 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.658788 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.658815 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.658999 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:31.659184 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:31.659347 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:31.659464 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:31.659613 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:22:31.660176 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.102 22 <nil> <nil>}
I0223 22:22:31.660212 80620 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-773885-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773885-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-773885-m02' | sudo tee -a /etc/hosts;
fi
fi
I0223 22:22:31.799792 80620 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0223 22:22:31.799859 80620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-59858/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-59858/.minikube}
I0223 22:22:31.799879 80620 buildroot.go:174] setting up certificates
I0223 22:22:31.799889 80620 provision.go:83] configureAuth start
I0223 22:22:31.799902 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
I0223 22:22:31.800252 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
I0223 22:22:31.803534 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.803989 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.804018 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.804274 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:31.806753 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.807088 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.807121 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.807237 80620 provision.go:138] copyHostCerts
I0223 22:22:31.807268 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
I0223 22:22:31.807311 80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem, removing ...
I0223 22:22:31.807324 80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
I0223 22:22:31.807414 80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem (1671 bytes)
I0223 22:22:31.807572 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
I0223 22:22:31.807597 80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem, removing ...
I0223 22:22:31.807602 80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
I0223 22:22:31.807632 80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem (1078 bytes)
I0223 22:22:31.807685 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
I0223 22:22:31.807702 80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem, removing ...
I0223 22:22:31.807707 80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
I0223 22:22:31.807729 80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem (1123 bytes)
I0223 22:22:31.807773 80620 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem org=jenkins.multinode-773885-m02 san=[192.168.39.102 192.168.39.102 localhost 127.0.0.1 minikube multinode-773885-m02]
I0223 22:22:32.063720 80620 provision.go:172] copyRemoteCerts
I0223 22:22:32.063776 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0223 22:22:32.063800 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:32.066310 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.066712 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:32.066742 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.066876 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:32.067090 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.067230 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:32.067359 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
I0223 22:22:32.161807 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0223 22:22:32.161874 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0223 22:22:32.184819 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem -> /etc/docker/server.pem
I0223 22:22:32.184883 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0223 22:22:32.206537 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0223 22:22:32.206625 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0223 22:22:32.228031 80620 provision.go:86] duration metric: configureAuth took 428.129514ms
I0223 22:22:32.228052 80620 buildroot.go:189] setting minikube options for container-runtime
I0223 22:22:32.228295 80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 22:22:32.228322 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:32.228634 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:32.231144 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.231489 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:32.231520 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.231601 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:32.231819 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.231999 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.232117 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:32.232312 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:22:32.232708 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.102 22 <nil> <nil>}
I0223 22:22:32.232719 80620 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0223 22:22:32.365102 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0223 22:22:32.365122 80620 buildroot.go:70] root file system type: tmpfs
I0223 22:22:32.365241 80620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0223 22:22:32.365265 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:32.367818 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.368241 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:32.368263 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.368492 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:32.368703 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.368872 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.368982 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:32.369180 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:22:32.369581 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.102 22 <nil> <nil>}
I0223 22:22:32.369639 80620 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.240"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0223 22:22:32.513495 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.240
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0223 22:22:32.513523 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:32.515906 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.516266 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:32.516300 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.516468 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:32.516680 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.516873 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.517028 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:32.517178 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:22:32.517625 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.102 22 <nil> <nil>}
I0223 22:22:32.517648 80620 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0223 22:22:33.354684 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0223 22:22:33.354711 80620 machine.go:91] provisioned docker machine in 1.843811829s
I0223 22:22:33.354721 80620 start.go:300] post-start starting for "multinode-773885-m02" (driver="kvm2")
I0223 22:22:33.354729 80620 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0223 22:22:33.354752 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:33.355077 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0223 22:22:33.355108 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:33.357808 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.358150 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:33.358170 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.358307 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:33.358509 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:33.358697 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:33.358856 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
I0223 22:22:33.452337 80620 ssh_runner.go:195] Run: cat /etc/os-release
I0223 22:22:33.456207 80620 command_runner.go:130] > NAME=Buildroot
I0223 22:22:33.456227 80620 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
I0223 22:22:33.456233 80620 command_runner.go:130] > ID=buildroot
I0223 22:22:33.456241 80620 command_runner.go:130] > VERSION_ID=2021.02.12
I0223 22:22:33.456248 80620 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0223 22:22:33.456287 80620 info.go:137] Remote host: Buildroot 2021.02.12
I0223 22:22:33.456303 80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/addons for local assets ...
I0223 22:22:33.456371 80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/files for local assets ...
I0223 22:22:33.456462 80620 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> 669272.pem in /etc/ssl/certs
I0223 22:22:33.456474 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /etc/ssl/certs/669272.pem
I0223 22:22:33.456577 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0223 22:22:33.464384 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /etc/ssl/certs/669272.pem (1708 bytes)
I0223 22:22:33.486196 80620 start.go:303] post-start completed in 131.456152ms
I0223 22:22:33.486221 80620 fix.go:57] fixHost completed within 19.590489491s
I0223 22:22:33.486246 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:33.488925 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.489233 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:33.489259 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.489444 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:33.489642 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:33.489819 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:33.489958 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:33.490087 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:22:33.490502 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.102 22 <nil> <nil>}
I0223 22:22:33.490517 80620 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0223 22:22:33.619595 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677190953.568894594
I0223 22:22:33.619615 80620 fix.go:207] guest clock: 1677190953.568894594
I0223 22:22:33.619622 80620 fix.go:220] Guest: 2023-02-23 22:22:33.568894594 +0000 UTC Remote: 2023-02-23 22:22:33.48622588 +0000 UTC m=+80.262153220 (delta=82.668714ms)
I0223 22:22:33.619636 80620 fix.go:191] guest clock delta is within tolerance: 82.668714ms
I0223 22:22:33.619643 80620 start.go:83] releasing machines lock for "multinode-773885-m02", held for 19.723927358s
I0223 22:22:33.619668 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:33.619923 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
I0223 22:22:33.622598 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.623025 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:33.623058 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.625082 80620 out.go:177] * Found network options:
I0223 22:22:33.626668 80620 out.go:177] - NO_PROXY=192.168.39.240
W0223 22:22:33.628011 80620 proxy.go:119] fail to check proxy env: Error ip not in block
I0223 22:22:33.628044 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:33.628608 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:33.628794 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:33.628886 80620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0223 22:22:33.628929 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
W0223 22:22:33.629039 80620 proxy.go:119] fail to check proxy env: Error ip not in block
I0223 22:22:33.629123 80620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0223 22:22:33.629150 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:33.631754 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.631877 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.632173 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:33.632199 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.632233 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:33.632253 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.632406 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:33.632530 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:33.632612 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:33.632687 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:33.632797 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:33.632952 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:33.632945 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
I0223 22:22:33.633068 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
I0223 22:22:33.747533 80620 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0223 22:22:33.748590 80620 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0223 22:22:33.748617 80620 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0223 22:22:33.748665 80620 ssh_runner.go:195] Run: which cri-dockerd
I0223 22:22:33.752644 80620 command_runner.go:130] > /usr/bin/cri-dockerd
I0223 22:22:33.752772 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0223 22:22:33.762613 80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0223 22:22:33.779129 80620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0223 22:22:33.794495 80620 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0223 22:22:33.794614 80620 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0223 22:22:33.794634 80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0223 22:22:33.794710 80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 22:22:33.819645 80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0223 22:22:33.819665 80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0223 22:22:33.819671 80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0223 22:22:33.819676 80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0223 22:22:33.819680 80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0223 22:22:33.819684 80620 command_runner.go:130] > registry.k8s.io/pause:3.9
I0223 22:22:33.819688 80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
I0223 22:22:33.819694 80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0223 22:22:33.819697 80620 command_runner.go:130] > registry.k8s.io/pause:3.6
I0223 22:22:33.819702 80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0223 22:22:33.819707 80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0223 22:22:33.821344 80620 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
kindest/kindnetd:v20221004-44d545d1
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0223 22:22:33.821366 80620 docker.go:560] Images already preloaded, skipping extraction
I0223 22:22:33.821378 80620 start.go:485] detecting cgroup driver to use...
I0223 22:22:33.821513 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 22:22:33.838092 80620 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0223 22:22:33.838113 80620 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
I0223 22:22:33.838173 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0223 22:22:33.849104 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0223 22:22:33.860042 80620 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0223 22:22:33.860082 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0223 22:22:33.871017 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 22:22:33.881892 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0223 22:22:33.892548 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 22:22:33.903374 80620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0223 22:22:33.914628 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0223 22:22:33.925877 80620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0223 22:22:33.935581 80620 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0223 22:22:33.935636 80620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0223 22:22:33.945618 80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 22:22:34.050114 80620 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0223 22:22:34.068154 80620 start.go:485] detecting cgroup driver to use...
I0223 22:22:34.068229 80620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0223 22:22:34.089986 80620 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0223 22:22:34.090009 80620 command_runner.go:130] > [Unit]
I0223 22:22:34.090019 80620 command_runner.go:130] > Description=Docker Application Container Engine
I0223 22:22:34.090033 80620 command_runner.go:130] > Documentation=https://docs.docker.com
I0223 22:22:34.090041 80620 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0223 22:22:34.090049 80620 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0223 22:22:34.090056 80620 command_runner.go:130] > StartLimitBurst=3
I0223 22:22:34.090063 80620 command_runner.go:130] > StartLimitIntervalSec=60
I0223 22:22:34.090072 80620 command_runner.go:130] > [Service]
I0223 22:22:34.090083 80620 command_runner.go:130] > Type=notify
I0223 22:22:34.090089 80620 command_runner.go:130] > Restart=on-failure
I0223 22:22:34.090104 80620 command_runner.go:130] > Environment=NO_PROXY=192.168.39.240
I0223 22:22:34.090111 80620 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0223 22:22:34.090118 80620 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0223 22:22:34.090150 80620 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0223 22:22:34.090164 80620 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0223 22:22:34.090170 80620 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0223 22:22:34.090176 80620 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0223 22:22:34.090182 80620 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0223 22:22:34.090190 80620 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0223 22:22:34.090196 80620 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0223 22:22:34.090200 80620 command_runner.go:130] > ExecStart=
I0223 22:22:34.090213 80620 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0223 22:22:34.090219 80620 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0223 22:22:34.090224 80620 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0223 22:22:34.090233 80620 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0223 22:22:34.090237 80620 command_runner.go:130] > LimitNOFILE=infinity
I0223 22:22:34.090241 80620 command_runner.go:130] > LimitNPROC=infinity
I0223 22:22:34.090245 80620 command_runner.go:130] > LimitCORE=infinity
I0223 22:22:34.090251 80620 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0223 22:22:34.090256 80620 command_runner.go:130] > # Only systemd 226 and above support this version.
I0223 22:22:34.090260 80620 command_runner.go:130] > TasksMax=infinity
I0223 22:22:34.090265 80620 command_runner.go:130] > TimeoutStartSec=0
I0223 22:22:34.090273 80620 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0223 22:22:34.090279 80620 command_runner.go:130] > Delegate=yes
I0223 22:22:34.090285 80620 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0223 22:22:34.090293 80620 command_runner.go:130] > KillMode=process
I0223 22:22:34.090297 80620 command_runner.go:130] > [Install]
I0223 22:22:34.090302 80620 command_runner.go:130] > WantedBy=multi-user.target
I0223 22:22:34.090359 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0223 22:22:34.105030 80620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0223 22:22:34.126591 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0223 22:22:34.140060 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 22:22:34.153929 80620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0223 22:22:34.184699 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 22:22:34.197888 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 22:22:34.214560 80620 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0223 22:22:34.214588 80620 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0223 22:22:34.214922 80620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0223 22:22:34.314415 80620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0223 22:22:34.423777 80620 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0223 22:22:34.423812 80620 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0223 22:22:34.439350 80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 22:22:34.539377 80620 ssh_runner.go:195] Run: sudo systemctl restart docker
I0223 22:22:35.976151 80620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.436733266s)
I0223 22:22:35.976218 80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0223 22:22:36.088366 80620 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0223 22:22:36.208338 80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0223 22:22:36.318554 80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 22:22:36.423882 80620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0223 22:22:36.438700 80620 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
I0223 22:22:36.441277 80620 out.go:177]
W0223 22:22:36.442813 80620 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W0223 22:22:36.442833 80620 out.go:239] *
W0223 22:22:36.443730 80620 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0223 22:22:36.445382 80620 out.go:177]
** /stderr **
multinode_test.go:295: failed to run minikube start. args "out/minikube-linux-amd64 node list -p multinode-773885" : exit status 90
multinode_test.go:298: (dbg) Run: out/minikube-linux-amd64 node list -p multinode-773885
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-773885 -n multinode-773885
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-773885 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-773885 logs -n 25: (1.31682533s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-773885 ssh -n | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | multinode-773885-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-773885 cp multinode-773885-m02:/home/docker/cp-test.txt | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | /tmp/TestMultiNodeserialCopyFile4107524372/001/cp-test_multinode-773885-m02.txt | | | | | |
| ssh | multinode-773885 ssh -n | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | multinode-773885-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-773885 cp multinode-773885-m02:/home/docker/cp-test.txt | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | multinode-773885:/home/docker/cp-test_multinode-773885-m02_multinode-773885.txt | | | | | |
| ssh | multinode-773885 ssh -n | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | multinode-773885-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-773885 ssh -n multinode-773885 sudo cat | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | /home/docker/cp-test_multinode-773885-m02_multinode-773885.txt | | | | | |
| cp | multinode-773885 cp multinode-773885-m02:/home/docker/cp-test.txt | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | multinode-773885-m03:/home/docker/cp-test_multinode-773885-m02_multinode-773885-m03.txt | | | | | |
| ssh | multinode-773885 ssh -n | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | multinode-773885-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-773885 ssh -n multinode-773885-m03 sudo cat | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | /home/docker/cp-test_multinode-773885-m02_multinode-773885-m03.txt | | | | | |
| cp | multinode-773885 cp testdata/cp-test.txt | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | multinode-773885-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-773885 ssh -n | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | multinode-773885-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-773885 cp multinode-773885-m03:/home/docker/cp-test.txt | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | /tmp/TestMultiNodeserialCopyFile4107524372/001/cp-test_multinode-773885-m03.txt | | | | | |
| ssh | multinode-773885 ssh -n | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | multinode-773885-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-773885 cp multinode-773885-m03:/home/docker/cp-test.txt | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | multinode-773885:/home/docker/cp-test_multinode-773885-m03_multinode-773885.txt | | | | | |
| ssh | multinode-773885 ssh -n | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | multinode-773885-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-773885 ssh -n multinode-773885 sudo cat | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | /home/docker/cp-test_multinode-773885-m03_multinode-773885.txt | | | | | |
| cp | multinode-773885 cp multinode-773885-m03:/home/docker/cp-test.txt | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | multinode-773885-m02:/home/docker/cp-test_multinode-773885-m03_multinode-773885-m02.txt | | | | | |
| ssh | multinode-773885 ssh -n | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | multinode-773885-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-773885 ssh -n multinode-773885-m02 sudo cat | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | /home/docker/cp-test_multinode-773885-m03_multinode-773885-m02.txt | | | | | |
| node | multinode-773885 node stop m03 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| node | multinode-773885 node start | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:20 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-773885 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | |
| stop | -p multinode-773885 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:20 UTC | 23 Feb 23 22:21 UTC |
| start | -p multinode-773885 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:21 UTC | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-773885 | multinode-773885 | jenkins | v1.29.0 | 23 Feb 23 22:22 UTC | |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/02/23 22:21:13
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.20.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0223 22:21:13.262206 80620 out.go:296] Setting OutFile to fd 1 ...
I0223 22:21:13.262485 80620 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 22:21:13.262530 80620 out.go:309] Setting ErrFile to fd 2...
I0223 22:21:13.262547 80620 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 22:21:13.263007 80620 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-59858/.minikube/bin
I0223 22:21:13.263577 80620 out.go:303] Setting JSON to false
I0223 22:21:13.264336 80620 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7426,"bootTime":1677183448,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0223 22:21:13.264396 80620 start.go:135] virtualization: kvm guest
I0223 22:21:13.267622 80620 out.go:177] * [multinode-773885] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0223 22:21:13.268914 80620 out.go:177] - MINIKUBE_LOCATION=15909
I0223 22:21:13.268968 80620 notify.go:220] Checking for updates...
I0223 22:21:13.270444 80620 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0223 22:21:13.271889 80620 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15909-59858/kubeconfig
I0223 22:21:13.273288 80620 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-59858/.minikube
I0223 22:21:13.274630 80620 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0223 22:21:13.275971 80620 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0223 22:21:13.277689 80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 22:21:13.277751 80620 driver.go:365] Setting default libvirt URI to qemu:///system
I0223 22:21:13.278270 80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 22:21:13.278328 80620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 22:21:13.292096 80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38981
I0223 22:21:13.292502 80620 main.go:141] libmachine: () Calling .GetVersion
I0223 22:21:13.293077 80620 main.go:141] libmachine: Using API Version 1
I0223 22:21:13.293100 80620 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 22:21:13.293421 80620 main.go:141] libmachine: () Calling .GetMachineName
I0223 22:21:13.293604 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:13.326142 80620 out.go:177] * Using the kvm2 driver based on existing profile
I0223 22:21:13.327601 80620 start.go:296] selected driver: kvm2
I0223 22:21:13.327615   80620 start.go:857] validating driver "kvm2" against &{Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 22:21:13.327745 80620 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0223 22:21:13.327989 80620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 22:21:13.328051 80620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-59858/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0223 22:21:13.341443 80620 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0223 22:21:13.342073 80620 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0223 22:21:13.342106 80620 cni.go:84] Creating CNI manager for ""
I0223 22:21:13.342116 80620 cni.go:136] 3 nodes found, recommending kindnet
I0223 22:21:13.342128 80620 start_flags.go:319] config:
{Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 22:21:13.342256 80620 iso.go:125] acquiring lock: {Name:mka4f25d544a3ff8c2a2fab814177dd4b23f9fc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 22:21:13.344079 80620 out.go:177] * Starting control plane node multinode-773885 in cluster multinode-773885
I0223 22:21:13.345362 80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0223 22:21:13.345394 80620 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0223 22:21:13.345409 80620 cache.go:57] Caching tarball of preloaded images
I0223 22:21:13.345481 80620 preload.go:174] Found /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0223 22:21:13.345493 80620 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0223 22:21:13.345663 80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
I0223 22:21:13.345836 80620 cache.go:193] Successfully downloaded all kic artifacts
I0223 22:21:13.345858 80620 start.go:364] acquiring machines lock for multinode-773885: {Name:mk190e887b13a8e75fbaa786555e3f621b6db823 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0223 22:21:13.345897 80620 start.go:368] acquired machines lock for "multinode-773885" in 21.539µs
I0223 22:21:13.345910 80620 start.go:96] Skipping create...Using existing machine configuration
I0223 22:21:13.345916 80620 fix.go:55] fixHost starting:
I0223 22:21:13.346182 80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 22:21:13.346210 80620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 22:21:13.358898 80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
I0223 22:21:13.359326 80620 main.go:141] libmachine: () Calling .GetVersion
I0223 22:21:13.359874 80620 main.go:141] libmachine: Using API Version 1
I0223 22:21:13.359895 80620 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 22:21:13.360176 80620 main.go:141] libmachine: () Calling .GetMachineName
I0223 22:21:13.360338 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:13.360464 80620 main.go:141] libmachine: (multinode-773885) Calling .GetState
I0223 22:21:13.361968 80620 fix.go:103] recreateIfNeeded on multinode-773885: state=Stopped err=<nil>
I0223 22:21:13.361991 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
W0223 22:21:13.362122 80620 fix.go:129] unexpected machine state, will restart: <nil>
I0223 22:21:13.364431 80620 out.go:177] * Restarting existing kvm2 VM for "multinode-773885" ...
I0223 22:21:13.365638 80620 main.go:141] libmachine: (multinode-773885) Calling .Start
I0223 22:21:13.365789 80620 main.go:141] libmachine: (multinode-773885) Ensuring networks are active...
I0223 22:21:13.366413 80620 main.go:141] libmachine: (multinode-773885) Ensuring network default is active
I0223 22:21:13.366726 80620 main.go:141] libmachine: (multinode-773885) Ensuring network mk-multinode-773885 is active
I0223 22:21:13.367088 80620 main.go:141] libmachine: (multinode-773885) Getting domain xml...
I0223 22:21:13.367766 80620 main.go:141] libmachine: (multinode-773885) Creating domain...
I0223 22:21:14.564410 80620 main.go:141] libmachine: (multinode-773885) Waiting to get IP...
I0223 22:21:14.565318 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:14.565709 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:14.565811 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:14.565729 80650 retry.go:31] will retry after 216.926568ms: waiting for machine to come up
I0223 22:21:14.784224 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:14.784682 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:14.784711 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:14.784633 80650 retry.go:31] will retry after 249.246042ms: waiting for machine to come up
I0223 22:21:15.035098 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:15.035423 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:15.035451 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.035397 80650 retry.go:31] will retry after 334.153469ms: waiting for machine to come up
I0223 22:21:15.370820 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:15.371326 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:15.371360 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.371252 80650 retry.go:31] will retry after 394.396319ms: waiting for machine to come up
I0223 22:21:15.766773 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:15.767259 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:15.767292 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:15.767204 80650 retry.go:31] will retry after 580.71112ms: waiting for machine to come up
I0223 22:21:16.350049 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:16.350438 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:16.350468 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:16.350387 80650 retry.go:31] will retry after 812.475241ms: waiting for machine to come up
I0223 22:21:17.164302 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:17.164761 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:17.164794 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:17.164713 80650 retry.go:31] will retry after 1.090615613s: waiting for machine to come up
I0223 22:21:18.257489 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:18.257882 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:18.257949 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:18.257850 80650 retry.go:31] will retry after 1.207436911s: waiting for machine to come up
I0223 22:21:19.467391 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:19.467804 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:19.467836 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:19.467758 80650 retry.go:31] will retry after 1.522373862s: waiting for machine to come up
I0223 22:21:20.992569 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:20.992936 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:20.992965 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:20.992883 80650 retry.go:31] will retry after 2.133891724s: waiting for machine to come up
I0223 22:21:23.129156 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:23.129626 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:23.129648 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:23.129597 80650 retry.go:31] will retry after 2.398257467s: waiting for machine to come up
I0223 22:21:25.529031 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:25.529472 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:25.529508 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:25.529418 80650 retry.go:31] will retry after 2.616816039s: waiting for machine to come up
I0223 22:21:28.149307 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:28.149703 80620 main.go:141] libmachine: (multinode-773885) DBG | unable to find current IP address of domain multinode-773885 in network mk-multinode-773885
I0223 22:21:28.149732 80620 main.go:141] libmachine: (multinode-773885) DBG | I0223 22:21:28.149668 80650 retry.go:31] will retry after 3.093858159s: waiting for machine to come up
I0223 22:21:31.245491 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.245970 80620 main.go:141] libmachine: (multinode-773885) Found IP for machine: 192.168.39.240
I0223 22:21:31.245992 80620 main.go:141] libmachine: (multinode-773885) Reserving static IP address...
I0223 22:21:31.246035 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has current primary IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.246498 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "multinode-773885", mac: "52:54:00:77:a9:85", ip: "192.168.39.240"} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.246523 80620 main.go:141] libmachine: (multinode-773885) DBG | skip adding static IP to network mk-multinode-773885 - found existing host DHCP lease matching {name: "multinode-773885", mac: "52:54:00:77:a9:85", ip: "192.168.39.240"}
I0223 22:21:31.246531 80620 main.go:141] libmachine: (multinode-773885) Reserved static IP address: 192.168.39.240
I0223 22:21:31.246540 80620 main.go:141] libmachine: (multinode-773885) Waiting for SSH to be available...
I0223 22:21:31.246549 80620 main.go:141] libmachine: (multinode-773885) DBG | Getting to WaitForSSH function...
I0223 22:21:31.248477 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.248821 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.248848 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.248945 80620 main.go:141] libmachine: (multinode-773885) DBG | Using SSH client type: external
I0223 22:21:31.248970 80620 main.go:141] libmachine: (multinode-773885) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa (-rw-------)
I0223 22:21:31.249043 80620 main.go:141] libmachine: (multinode-773885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa -p 22] /usr/bin/ssh <nil>}
I0223 22:21:31.249076 80620 main.go:141] libmachine: (multinode-773885) DBG | About to run SSH command:
I0223 22:21:31.249094 80620 main.go:141] libmachine: (multinode-773885) DBG | exit 0
I0223 22:21:31.338971 80620 main.go:141] libmachine: (multinode-773885) DBG | SSH cmd err, output: <nil>:
I0223 22:21:31.339315 80620 main.go:141] libmachine: (multinode-773885) Calling .GetConfigRaw
I0223 22:21:31.339952 80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
I0223 22:21:31.342708 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.343091 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.343112 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.343382 80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
I0223 22:21:31.343587 80620 machine.go:88] provisioning docker machine ...
I0223 22:21:31.343612 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:31.343856 80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
I0223 22:21:31.344026 80620 buildroot.go:166] provisioning hostname "multinode-773885"
I0223 22:21:31.344045 80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
I0223 22:21:31.344189 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:31.346343 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.346741 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.346772 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.346912 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:31.347101 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.347235 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.347362 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:31.347563 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:21:31.347987 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0223 22:21:31.348001 80620 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-773885 && echo "multinode-773885" | sudo tee /etc/hostname
I0223 22:21:31.483698 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773885
I0223 22:21:31.483729 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:31.486353 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.486705 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.486729 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.486927 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:31.487146 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.487349 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.487567 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:31.487765 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:21:31.488223 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0223 22:21:31.488247 80620 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-773885' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773885/g' /etc/hosts;
else
echo '127.0.1.1 multinode-773885' | sudo tee -a /etc/hosts;
fi
fi
I0223 22:21:31.610531 80620 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0223 22:21:31.610563 80620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-59858/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-59858/.minikube}
I0223 22:21:31.610579 80620 buildroot.go:174] setting up certificates
I0223 22:21:31.610589 80620 provision.go:83] configureAuth start
I0223 22:21:31.610602 80620 main.go:141] libmachine: (multinode-773885) Calling .GetMachineName
I0223 22:21:31.610887 80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
I0223 22:21:31.613554 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.613875 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.613901 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.614087 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:31.616271 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.616732 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.616766 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.616828 80620 provision.go:138] copyHostCerts
I0223 22:21:31.616880 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
I0223 22:21:31.616925 80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem, removing ...
I0223 22:21:31.616938 80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
I0223 22:21:31.617049 80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem (1078 bytes)
I0223 22:21:31.617142 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
I0223 22:21:31.617171 80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem, removing ...
I0223 22:21:31.617182 80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
I0223 22:21:31.617225 80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem (1123 bytes)
I0223 22:21:31.617338 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
I0223 22:21:31.617367 80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem, removing ...
I0223 22:21:31.617373 80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
I0223 22:21:31.617412 80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem (1671 bytes)
I0223 22:21:31.617475 80620 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem org=jenkins.multinode-773885 san=[192.168.39.240 192.168.39.240 localhost 127.0.0.1 minikube multinode-773885]
I0223 22:21:31.813280 80620 provision.go:172] copyRemoteCerts
I0223 22:21:31.813353 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0223 22:21:31.813402 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:31.816285 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.816679 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.816716 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.816918 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:31.817162 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.817351 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:31.817481 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
I0223 22:21:31.903913 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0223 22:21:31.904023 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0223 22:21:31.928843 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem -> /etc/docker/server.pem
I0223 22:21:31.928908 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0223 22:21:31.953083 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0223 22:21:31.953136 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0223 22:21:31.977825 80620 provision.go:86] duration metric: configureAuth took 367.222576ms
I0223 22:21:31.977848 80620 buildroot.go:189] setting minikube options for container-runtime
I0223 22:21:31.978069 80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 22:21:31.978096 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:31.978344 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:31.980808 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.981196 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:31.981226 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:31.981404 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:31.981631 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.981794 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:31.981903 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:31.982052 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:21:31.982469 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0223 22:21:31.982488 80620 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0223 22:21:32.100345 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0223 22:21:32.100366 80620 buildroot.go:70] root file system type: tmpfs
I0223 22:21:32.100467 80620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0223 22:21:32.100489 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:32.103003 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:32.103407 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:32.103436 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:32.103637 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:32.103824 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:32.103965 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:32.104148 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:32.104371 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:21:32.104858 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0223 22:21:32.104953 80620 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0223 22:21:32.237312 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0223 22:21:32.237343 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:32.240081 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:32.240430 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:32.240481 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:32.240599 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:32.240764 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:32.240928 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:32.241022 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:32.241158 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:21:32.241558 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0223 22:21:32.241575 80620 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0223 22:21:33.112176 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
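[annotation] The step above is a "replace only if changed" update: `diff` exits non-zero when the installed unit differs from (or doesn't exist relative to) the new one, and only then is the new file moved into place and the service reloaded/restarted. A sketch of the pattern with hypothetical scratch files (`unit.old`/`unit.new`) standing in for the systemd paths, and the restart replaced by an echo:

```shell
#!/bin/sh
# Sketch of minikube's diff-or-replace unit update, using scratch files
# instead of /lib/systemd/system/docker.service{,.new}.
printf 'ExecStart=/usr/bin/dockerd --old-flags\n' > unit.old
printf 'ExecStart=/usr/bin/dockerd --new-flags\n' > unit.new

# diff exits non-zero on any difference (or if unit.old is missing,
# as in the log's "can't stat" case), triggering the install branch.
diff -u unit.old unit.new >/dev/null 2>&1 || {
  mv unit.new unit.old
  echo "daemon-reload + enable + restart docker (simulated)"
}
cat unit.old
```

When the files already match, the `||` branch is skipped entirely, so an unchanged restart is a no-op.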
I0223 22:21:33.112206 80620 machine.go:91] provisioned docker machine in 1.76860164s
I0223 22:21:33.112216 80620 start.go:300] post-start starting for "multinode-773885" (driver="kvm2")
I0223 22:21:33.112222 80620 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0223 22:21:33.112238 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:33.112595 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0223 22:21:33.112636 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:33.115711 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.116122 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:33.116159 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.116274 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:33.116476 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:33.116715 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:33.116933 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
I0223 22:21:33.204860 80620 ssh_runner.go:195] Run: cat /etc/os-release
I0223 22:21:33.208799 80620 command_runner.go:130] > NAME=Buildroot
I0223 22:21:33.208819 80620 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
I0223 22:21:33.208823 80620 command_runner.go:130] > ID=buildroot
I0223 22:21:33.208829 80620 command_runner.go:130] > VERSION_ID=2021.02.12
I0223 22:21:33.208833 80620 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0223 22:21:33.208858 80620 info.go:137] Remote host: Buildroot 2021.02.12
I0223 22:21:33.208867 80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/addons for local assets ...
I0223 22:21:33.208924 80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/files for local assets ...
I0223 22:21:33.208996 80620 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> 669272.pem in /etc/ssl/certs
I0223 22:21:33.209017 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /etc/ssl/certs/669272.pem
I0223 22:21:33.209096 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0223 22:21:33.216834 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /etc/ssl/certs/669272.pem (1708 bytes)
I0223 22:21:33.238598 80620 start.go:303] post-start completed in 126.369412ms
I0223 22:21:33.238618 80620 fix.go:57] fixHost completed within 19.892701007s
I0223 22:21:33.238638 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:33.241628 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.242000 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:33.242020 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.242184 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:33.242377 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:33.242544 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:33.242697 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:33.242867 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:21:33.243253 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.240 22 <nil> <nil>}
I0223 22:21:33.243264 80620 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0223 22:21:33.359558 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677190893.310436860
I0223 22:21:33.359587 80620 fix.go:207] guest clock: 1677190893.310436860
I0223 22:21:33.359596 80620 fix.go:220] Guest: 2023-02-23 22:21:33.31043686 +0000 UTC Remote: 2023-02-23 22:21:33.238622371 +0000 UTC m=+20.014549698 (delta=71.814489ms)
I0223 22:21:33.359621 80620 fix.go:191] guest clock delta is within tolerance: 71.814489ms
I0223 22:21:33.359628 80620 start.go:83] releasing machines lock for "multinode-773885", held for 20.013722401s
I0223 22:21:33.359654 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:33.359925 80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
I0223 22:21:33.362448 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.362830 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:33.362872 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.362979 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:33.363495 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:33.363673 80620 main.go:141] libmachine: (multinode-773885) Calling .DriverName
I0223 22:21:33.363761 80620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0223 22:21:33.363798 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:33.363978 80620 ssh_runner.go:195] Run: cat /version.json
I0223 22:21:33.364008 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHHostname
I0223 22:21:33.366567 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.366853 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.366894 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:33.366918 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.367103 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:33.367284 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:33.367338 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:33.367363 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:33.367483 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:33.367511 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHPort
I0223 22:21:33.367637 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
I0223 22:21:33.367796 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHKeyPath
I0223 22:21:33.367946 80620 main.go:141] libmachine: (multinode-773885) Calling .GetSSHUsername
I0223 22:21:33.368088 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885/id_rsa Username:docker}
I0223 22:21:33.472525 80620 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0223 22:21:33.472587 80620 command_runner.go:130] > {"iso_version": "v1.29.0-1676568791-15849", "kicbase_version": "v0.0.37-1675980448-15752", "minikube_version": "v1.29.0", "commit": "cf7ad99382c4b89a2ffa286b1101797332265ce3"}
I0223 22:21:33.472717 80620 ssh_runner.go:195] Run: systemctl --version
I0223 22:21:33.478170 80620 command_runner.go:130] > systemd 247 (247)
I0223 22:21:33.478214 80620 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I0223 22:21:33.478449 80620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0223 22:21:33.483322 80620 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0223 22:21:33.483517 80620 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0223 22:21:33.483559 80620 ssh_runner.go:195] Run: which cri-dockerd
I0223 22:21:33.486877 80620 command_runner.go:130] > /usr/bin/cri-dockerd
I0223 22:21:33.486963 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0223 22:21:33.494937 80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0223 22:21:33.509789 80620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0223 22:21:33.522704 80620 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0223 22:21:33.523037 80620 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0223 22:21:33.523053 80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0223 22:21:33.523114 80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 22:21:33.547334 80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0223 22:21:33.547357 80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0223 22:21:33.547366 80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0223 22:21:33.547373 80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0223 22:21:33.547379 80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0223 22:21:33.547386 80620 command_runner.go:130] > registry.k8s.io/pause:3.9
I0223 22:21:33.547393 80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
I0223 22:21:33.547402 80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0223 22:21:33.547409 80620 command_runner.go:130] > registry.k8s.io/pause:3.6
I0223 22:21:33.547429 80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0223 22:21:33.547437 80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0223 22:21:33.548840 80620 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
kindest/kindnetd:v20221004-44d545d1
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0223 22:21:33.548856 80620 docker.go:560] Images already preloaded, skipping extraction
I0223 22:21:33.548865 80620 start.go:485] detecting cgroup driver to use...
I0223 22:21:33.548962 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 22:21:33.565249 80620 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0223 22:21:33.565271 80620 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
I0223 22:21:33.565339 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0223 22:21:33.574475 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0223 22:21:33.582936 80620 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0223 22:21:33.582977 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0223 22:21:33.591609 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 22:21:33.600301 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0223 22:21:33.608920 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 22:21:33.617470 80620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0223 22:21:33.626224 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
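[annotation] The run of `sed -i -r` commands above edits `/etc/containerd/config.toml` in place, preserving indentation via the `( *)` capture group while forcing the cgroupfs driver and the expected pause image. A sketch of two of those edits against a scratch TOML file (`config-sketch.toml`, a hypothetical stand-in):

```shell
#!/bin/sh
# Sketch of the sed-based containerd config edits, on a scratch file.
CFG=config-sketch.toml
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Force cgroupfs (SystemdCgroup = false) and pin the sandbox image,
# keeping each line's leading indentation via the \1 back-reference.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
cat "$CFG"
```

Because each substitution matches any existing value on the right-hand side, re-running the edits is idempotent, which is what lets minikube apply them on every restart.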
I0223 22:21:33.634536 80620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0223 22:21:33.642631 80620 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0223 22:21:33.642679 80620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0223 22:21:33.650322 80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 22:21:33.748276 80620 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0223 22:21:33.765231 80620 start.go:485] detecting cgroup driver to use...
I0223 22:21:33.765298 80620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0223 22:21:33.783055 80620 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0223 22:21:33.783552 80620 command_runner.go:130] > [Unit]
I0223 22:21:33.783568 80620 command_runner.go:130] > Description=Docker Application Container Engine
I0223 22:21:33.783574 80620 command_runner.go:130] > Documentation=https://docs.docker.com
I0223 22:21:33.783579 80620 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0223 22:21:33.783584 80620 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0223 22:21:33.783589 80620 command_runner.go:130] > StartLimitBurst=3
I0223 22:21:33.783595 80620 command_runner.go:130] > StartLimitIntervalSec=60
I0223 22:21:33.783598 80620 command_runner.go:130] > [Service]
I0223 22:21:33.783603 80620 command_runner.go:130] > Type=notify
I0223 22:21:33.783607 80620 command_runner.go:130] > Restart=on-failure
I0223 22:21:33.783614 80620 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0223 22:21:33.783625 80620 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0223 22:21:33.783631 80620 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0223 22:21:33.783640 80620 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0223 22:21:33.783647 80620 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0223 22:21:33.783653 80620 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0223 22:21:33.783660 80620 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0223 22:21:33.783668 80620 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0223 22:21:33.783674 80620 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0223 22:21:33.783678 80620 command_runner.go:130] > ExecStart=
I0223 22:21:33.783691 80620 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0223 22:21:33.783696 80620 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0223 22:21:33.783702 80620 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0223 22:21:33.783708 80620 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0223 22:21:33.783712 80620 command_runner.go:130] > LimitNOFILE=infinity
I0223 22:21:33.783715 80620 command_runner.go:130] > LimitNPROC=infinity
I0223 22:21:33.783719 80620 command_runner.go:130] > LimitCORE=infinity
I0223 22:21:33.783724 80620 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0223 22:21:33.783728 80620 command_runner.go:130] > # Only systemd 226 and above support this version.
I0223 22:21:33.783733 80620 command_runner.go:130] > TasksMax=infinity
I0223 22:21:33.783736 80620 command_runner.go:130] > TimeoutStartSec=0
I0223 22:21:33.783742 80620 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0223 22:21:33.783746 80620 command_runner.go:130] > Delegate=yes
I0223 22:21:33.783751 80620 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0223 22:21:33.783755 80620 command_runner.go:130] > KillMode=process
I0223 22:21:33.783758 80620 command_runner.go:130] > [Install]
I0223 22:21:33.783765 80620 command_runner.go:130] > WantedBy=multi-user.target
I0223 22:21:33.784203 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0223 22:21:33.800310 80620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0223 22:21:33.820089 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0223 22:21:33.831934 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 22:21:33.843320 80620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0223 22:21:33.870509 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 22:21:33.882768 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 22:21:33.898405 80620 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0223 22:21:33.898433 80620 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0223 22:21:33.898700 80620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0223 22:21:33.998916 80620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0223 22:21:34.101490 80620 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0223 22:21:34.101526 80620 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0223 22:21:34.117559 80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 22:21:34.221898 80620 ssh_runner.go:195] Run: sudo systemctl restart docker
I0223 22:21:35.643194 80620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.421256026s)
I0223 22:21:35.643291 80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0223 22:21:35.759716 80620 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0223 22:21:35.863224 80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0223 22:21:35.965951 80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 22:21:36.072240 80620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0223 22:21:36.092427 80620 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0223 22:21:36.092508 80620 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0223 22:21:36.104108 80620 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0223 22:21:36.104128 80620 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0223 22:21:36.104134 80620 command_runner.go:130] > Device: 16h/22d Inode: 814 Links: 1
I0223 22:21:36.104143 80620 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0223 22:21:36.104156 80620 command_runner.go:130] > Access: 2023-02-23 22:21:36.038985633 +0000
I0223 22:21:36.104168 80620 command_runner.go:130] > Modify: 2023-02-23 22:21:36.038985633 +0000
I0223 22:21:36.104180 80620 command_runner.go:130] > Change: 2023-02-23 22:21:36.041985633 +0000
I0223 22:21:36.104189 80620 command_runner.go:130] > Birth: -
I0223 22:21:36.104213 80620 start.go:553] Will wait 60s for crictl version
I0223 22:21:36.104260 80620 ssh_runner.go:195] Run: which crictl
I0223 22:21:36.110223 80620 command_runner.go:130] > /usr/bin/crictl
I0223 22:21:36.110588 80620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0223 22:21:36.185549 80620 command_runner.go:130] > Version: 0.1.0
I0223 22:21:36.185577 80620 command_runner.go:130] > RuntimeName: docker
I0223 22:21:36.185585 80620 command_runner.go:130] > RuntimeVersion: 20.10.23
I0223 22:21:36.185593 80620 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0223 22:21:36.185626 80620 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0223 22:21:36.185698 80620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 22:21:36.217919 80620 command_runner.go:130] > 20.10.23
I0223 22:21:36.219196 80620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 22:21:36.248973 80620 command_runner.go:130] > 20.10.23
I0223 22:21:36.253095 80620 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
I0223 22:21:36.253136 80620 main.go:141] libmachine: (multinode-773885) Calling .GetIP
I0223 22:21:36.255830 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:36.256233 80620 main.go:141] libmachine: (multinode-773885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:a9:85", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:21:25 +0000 UTC Type:0 Mac:52:54:00:77:a9:85 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-773885 Clientid:01:52:54:00:77:a9:85}
I0223 22:21:36.256260 80620 main.go:141] libmachine: (multinode-773885) DBG | domain multinode-773885 has defined IP address 192.168.39.240 and MAC address 52:54:00:77:a9:85 in network mk-multinode-773885
I0223 22:21:36.256492 80620 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0223 22:21:36.260126 80620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
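[Editor's note: the `{ grep -v ...; echo ...; } > /tmp/h.$$` one-liner above is an idempotent way to replace a single `/etc/hosts` entry: strip any stale line for the hostname, then append the current mapping. A sketch on a throwaway file instead of the real `/etc/hosts` — the starting contents are assumed for illustration:]

```shell
# Throwaway stand-in for /etc/hosts with a stale minikube entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n' > "$hosts"

# Drop the old entry (matched by trailing tab + name), append the new one,
# then copy the result back -- safe to re-run any number of times.
new=$(mktemp)
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$new"
cp "$new" "$hosts"

cat "$hosts"
```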
I0223 22:21:36.272218 80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0223 22:21:36.272269 80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 22:21:36.294497 80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0223 22:21:36.294518 80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0223 22:21:36.294523 80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0223 22:21:36.294528 80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0223 22:21:36.294532 80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0223 22:21:36.294536 80620 command_runner.go:130] > registry.k8s.io/pause:3.9
I0223 22:21:36.294541 80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
I0223 22:21:36.294546 80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0223 22:21:36.294550 80620 command_runner.go:130] > registry.k8s.io/pause:3.6
I0223 22:21:36.294554 80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0223 22:21:36.294558 80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0223 22:21:36.295537 80620 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
kindest/kindnetd:v20221004-44d545d1
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0223 22:21:36.295553 80620 docker.go:560] Images already preloaded, skipping extraction
I0223 22:21:36.295600 80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 22:21:36.317087 80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0223 22:21:36.317104 80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0223 22:21:36.317109 80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0223 22:21:36.317114 80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0223 22:21:36.317119 80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0223 22:21:36.317123 80620 command_runner.go:130] > registry.k8s.io/pause:3.9
I0223 22:21:36.317127 80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
I0223 22:21:36.317133 80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0223 22:21:36.317137 80620 command_runner.go:130] > registry.k8s.io/pause:3.6
I0223 22:21:36.317142 80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0223 22:21:36.317149 80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0223 22:21:36.318116 80620 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
kindest/kindnetd:v20221004-44d545d1
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0223 22:21:36.318131 80620 cache_images.go:84] Images are preloaded, skipping loading
I0223 22:21:36.318198 80620 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0223 22:21:36.351288 80620 command_runner.go:130] > cgroupfs
I0223 22:21:36.352347 80620 cni.go:84] Creating CNI manager for ""
I0223 22:21:36.352366 80620 cni.go:136] 3 nodes found, recommending kindnet
I0223 22:21:36.352384 80620 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0223 22:21:36.352404 80620 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-773885 NodeName:multinode-773885 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0223 22:21:36.352535 80620 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.240
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-773885"
kubeletExtraArgs:
node-ip: 192.168.39.240
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0223 22:21:36.352608 80620 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-773885 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0223 22:21:36.352654 80620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0223 22:21:36.361734 80620 command_runner.go:130] > kubeadm
I0223 22:21:36.361745 80620 command_runner.go:130] > kubectl
I0223 22:21:36.361749 80620 command_runner.go:130] > kubelet
I0223 22:21:36.361984 80620 binaries.go:44] Found k8s binaries, skipping transfer
I0223 22:21:36.362045 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0223 22:21:36.369631 80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
I0223 22:21:36.384815 80620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0223 22:21:36.399471 80620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
I0223 22:21:36.414791 80620 ssh_runner.go:195] Run: grep 192.168.39.240 control-plane.minikube.internal$ /etc/hosts
I0223 22:21:36.418133 80620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.240 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0223 22:21:36.429567 80620 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885 for IP: 192.168.39.240
I0223 22:21:36.429596 80620 certs.go:186] acquiring lock for shared ca certs: {Name:mkb47a35d7b33f6ba829c92dc16cfaf70cb716c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 22:21:36.429732 80620 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key
I0223 22:21:36.429768 80620 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key
I0223 22:21:36.429863 80620 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key
I0223 22:21:36.429933 80620 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key.ac2ca5a7
I0223 22:21:36.429971 80620 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key
I0223 22:21:36.429982 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0223 22:21:36.429999 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0223 22:21:36.430009 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0223 22:21:36.430023 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0223 22:21:36.430035 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0223 22:21:36.430047 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0223 22:21:36.430058 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0223 22:21:36.430070 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0223 22:21:36.430120 80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem (1338 bytes)
W0223 22:21:36.430145 80620 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927_empty.pem, impossibly tiny 0 bytes
I0223 22:21:36.430155 80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem (1675 bytes)
I0223 22:21:36.430178 80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem (1078 bytes)
I0223 22:21:36.430200 80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem (1123 bytes)
I0223 22:21:36.430224 80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem (1671 bytes)
I0223 22:21:36.430265 80620 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem (1708 bytes)
I0223 22:21:36.430293 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /usr/share/ca-certificates/669272.pem
I0223 22:21:36.430307 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0223 22:21:36.430319 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem -> /usr/share/ca-certificates/66927.pem
I0223 22:21:36.430835 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0223 22:21:36.452666 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0223 22:21:36.474354 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0223 22:21:36.496347 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0223 22:21:36.518192 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0223 22:21:36.539742 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0223 22:21:36.561567 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0223 22:21:36.582936 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0223 22:21:36.605667 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /usr/share/ca-certificates/669272.pem (1708 bytes)
I0223 22:21:36.627349 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0223 22:21:36.649138 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/66927.pem --> /usr/share/ca-certificates/66927.pem (1338 bytes)
I0223 22:21:36.670645 80620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0223 22:21:36.685674 80620 ssh_runner.go:195] Run: openssl version
I0223 22:21:36.690629 80620 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0223 22:21:36.690924 80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/66927.pem && ln -fs /usr/share/ca-certificates/66927.pem /etc/ssl/certs/66927.pem"
I0223 22:21:36.699754 80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/66927.pem
I0223 22:21:36.703759 80620 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/66927.pem
I0223 22:21:36.704095 80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/66927.pem
I0223 22:21:36.704128 80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/66927.pem
I0223 22:21:36.709182 80620 command_runner.go:130] > 51391683
I0223 22:21:36.709238 80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/66927.pem /etc/ssl/certs/51391683.0"
I0223 22:21:36.718122 80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669272.pem && ln -fs /usr/share/ca-certificates/669272.pem /etc/ssl/certs/669272.pem"
I0223 22:21:36.726789 80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669272.pem
I0223 22:21:36.730766 80620 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/669272.pem
I0223 22:21:36.730841 80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/669272.pem
I0223 22:21:36.730885 80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669272.pem
I0223 22:21:36.735795 80620 command_runner.go:130] > 3ec20f2e
I0223 22:21:36.736176 80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/669272.pem /etc/ssl/certs/3ec20f2e.0"
I0223 22:21:36.745026 80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0223 22:21:36.753682 80620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0223 22:21:36.757609 80620 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
I0223 22:21:36.757830 80620 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
I0223 22:21:36.757864 80620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0223 22:21:36.762876 80620 command_runner.go:130] > b5213941
I0223 22:21:36.762930 80620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
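[Editor's note: the `openssl x509 -hash -noout` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above install CA certificates under the subject-hash filenames OpenSSL uses for lookup (e.g. `51391683.0`, `b5213941.0` in this log). A sketch computing such a hash for a throwaway self-signed cert, without touching `/etc/ssl/certs`:]

```shell
# Generate a throwaway self-signed certificate (subject is illustrative).
key=$(mktemp); crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" \
  -days 1 -subj "/CN=example-ca" 2>/dev/null

# OpenSSL resolves CAs in /etc/ssl/certs by "<subject-hash>.0" symlinks;
# this prints the 8-hex-digit hash that would name the symlink.
hash=$(openssl x509 -hash -noout -in "$crt")
echo "$hash"
```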
I0223 22:21:36.771746 80620 kubeadm.go:401] StartCluster: {Name:multinode-773885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-773885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 22:21:36.771889 80620 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0223 22:21:36.795673 80620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0223 22:21:36.804158 80620 command_runner.go:130] > /var/lib/kubelet/config.yaml
I0223 22:21:36.804177 80620 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
I0223 22:21:36.804208 80620 command_runner.go:130] > /var/lib/minikube/etcd:
I0223 22:21:36.804223 80620 command_runner.go:130] > member
I0223 22:21:36.804253 80620 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0223 22:21:36.804270 80620 kubeadm.go:633] restartCluster start
I0223 22:21:36.804326 80620 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0223 22:21:36.812345 80620 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0223 22:21:36.812718 80620 kubeconfig.go:135] verify returned: extract IP: "multinode-773885" does not appear in /home/jenkins/minikube-integration/15909-59858/kubeconfig
I0223 22:21:36.812798 80620 kubeconfig.go:146] "multinode-773885" context is missing from /home/jenkins/minikube-integration/15909-59858/kubeconfig - will repair!
I0223 22:21:36.813094 80620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-59858/kubeconfig: {Name:mkb3ee8537c1c29485268d18a34139db6a7d5ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 22:21:36.813506 80620 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15909-59858/kubeconfig
I0223 22:21:36.813719   80620 kapi.go:59] client config for multinode-773885: &rest.Config{Host:"https://192.168.39.240:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key", CAFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0223 22:21:36.814424 80620 cert_rotation.go:137] Starting client certificate rotation controller
I0223 22:21:36.814616 80620 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0223 22:21:36.822391 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:36.822434 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:36.832386 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:37.333153 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:37.333231 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:37.344298 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:37.832833 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:37.832931 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:37.843863 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:38.333039 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:38.333157 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:38.344397 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:38.833335 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:38.833418 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:38.844307 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:39.332585 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:39.332660 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:39.343665 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:39.833274 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:39.833358 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:39.844484 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:40.332983 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:40.333065 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:40.344099 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:40.832657 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:40.832750 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:40.843615 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:41.333154 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:41.333245 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:41.344059 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:41.832619 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:41.832703 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:41.843654 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:42.333248 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:42.333328 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:42.344533 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:42.833157 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:42.833256 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:42.843975 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:43.333351 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:43.333418 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:43.344740 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:43.832562 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:43.832672 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:43.843659 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:44.333327 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:44.333407 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:44.344578 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:44.833173 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:44.833245 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:44.844332 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:45.332909 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:45.333037 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:45.344107 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:45.832647 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:45.832732 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:45.843986 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:46.332538 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:46.332617 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:46.343428 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:46.833367 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:46.833455 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:46.844521 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:46.844541 80620 api_server.go:165] Checking apiserver status ...
I0223 22:21:46.844582 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 22:21:46.854411 80620 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 22:21:46.854446 80620 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
I0223 22:21:46.854455 80620 kubeadm.go:1120] stopping kube-system containers ...
I0223 22:21:46.854520 80620 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0223 22:21:46.882631 80620 command_runner.go:130] > a31cf43457e0
I0223 22:21:46.882655 80620 command_runner.go:130] > b83daa4cdd8d
I0223 22:21:46.882661 80620 command_runner.go:130] > 75e472928e30
I0223 22:21:46.882666 80620 command_runner.go:130] > 20f2e353f8d4
I0223 22:21:46.882674 80620 command_runner.go:130] > f6b2b873cba9
I0223 22:21:46.882682 80620 command_runner.go:130] > 6becaf5c8640
I0223 22:21:46.882688 80620 command_runner.go:130] > a2a9a29b5a41
I0223 22:21:46.882694 80620 command_runner.go:130] > f284ce294fa0
I0223 22:21:46.882700 80620 command_runner.go:130] > 8d29ee663e61
I0223 22:21:46.882707 80620 command_runner.go:130] > baad115b76c6
I0223 22:21:46.882725 80620 command_runner.go:130] > 53723346fe3c
I0223 22:21:46.882735 80620 command_runner.go:130] > 6a41aad93299
I0223 22:21:46.882743 80620 command_runner.go:130] > 745d6ec7adf4
I0223 22:21:46.882750 80620 command_runner.go:130] > 979e703c6176
I0223 22:21:46.882757 80620 command_runner.go:130] > 3b6e6d975efa
I0223 22:21:46.882766 80620 command_runner.go:130] > 072b5f08a10f
I0223 22:21:46.882797 80620 docker.go:456] Stopping containers: [a31cf43457e0 b83daa4cdd8d 75e472928e30 20f2e353f8d4 f6b2b873cba9 6becaf5c8640 a2a9a29b5a41 f284ce294fa0 8d29ee663e61 baad115b76c6 53723346fe3c 6a41aad93299 745d6ec7adf4 979e703c6176 3b6e6d975efa 072b5f08a10f]
I0223 22:21:46.882868 80620 ssh_runner.go:195] Run: docker stop a31cf43457e0 b83daa4cdd8d 75e472928e30 20f2e353f8d4 f6b2b873cba9 6becaf5c8640 a2a9a29b5a41 f284ce294fa0 8d29ee663e61 baad115b76c6 53723346fe3c 6a41aad93299 745d6ec7adf4 979e703c6176 3b6e6d975efa 072b5f08a10f
I0223 22:21:46.908823 80620 command_runner.go:130] > a31cf43457e0
I0223 22:21:46.908844 80620 command_runner.go:130] > b83daa4cdd8d
I0223 22:21:46.908853 80620 command_runner.go:130] > 75e472928e30
I0223 22:21:46.908858 80620 command_runner.go:130] > 20f2e353f8d4
I0223 22:21:46.908865 80620 command_runner.go:130] > f6b2b873cba9
I0223 22:21:46.908870 80620 command_runner.go:130] > 6becaf5c8640
I0223 22:21:46.908876 80620 command_runner.go:130] > a2a9a29b5a41
I0223 22:21:46.909404 80620 command_runner.go:130] > f284ce294fa0
I0223 22:21:46.909419 80620 command_runner.go:130] > 8d29ee663e61
I0223 22:21:46.909424 80620 command_runner.go:130] > baad115b76c6
I0223 22:21:46.909441 80620 command_runner.go:130] > 53723346fe3c
I0223 22:21:46.909828 80620 command_runner.go:130] > 6a41aad93299
I0223 22:21:46.909847 80620 command_runner.go:130] > 745d6ec7adf4
I0223 22:21:46.909853 80620 command_runner.go:130] > 979e703c6176
I0223 22:21:46.909858 80620 command_runner.go:130] > 3b6e6d975efa
I0223 22:21:46.909864 80620 command_runner.go:130] > 072b5f08a10f
I0223 22:21:46.911025 80620 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0223 22:21:46.925825 80620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0223 22:21:46.933780 80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0223 22:21:46.933807 80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0223 22:21:46.933818 80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0223 22:21:46.933842 80620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0223 22:21:46.934068 80620 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0223 22:21:46.934127 80620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0223 22:21:46.942292 80620 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0223 22:21:46.942311 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0223 22:21:47.060140 80620 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0223 22:21:47.060421 80620 command_runner.go:130] > [certs] Using existing ca certificate authority
I0223 22:21:47.060722 80620 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I0223 22:21:47.061266 80620 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0223 22:21:47.061579 80620 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
I0223 22:21:47.062097 80620 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
I0223 22:21:47.062730 80620 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
I0223 22:21:47.063273 80620 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
I0223 22:21:47.063668 80620 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
I0223 22:21:47.064166 80620 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0223 22:21:47.064500 80620 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
I0223 22:21:47.064789 80620 command_runner.go:130] > [certs] Using the existing "sa" key
I0223 22:21:47.066082 80620 command_runner.go:130] ! W0223 22:21:47.003599 1259 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0223 22:21:47.066190 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0223 22:21:47.118462 80620 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0223 22:21:47.207705 80620 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0223 22:21:47.310176 80620 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0223 22:21:47.491530 80620 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0223 22:21:47.570853 80620 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0223 22:21:47.573364 80620 command_runner.go:130] ! W0223 22:21:47.061082 1265 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0223 22:21:47.573502 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0223 22:21:47.637325 80620 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0223 22:21:47.638644 80620 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0223 22:21:47.638664 80620 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0223 22:21:47.751602 80620 command_runner.go:130] ! W0223 22:21:47.567753 1271 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0223 22:21:47.751640 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0223 22:21:47.811937 80620 command_runner.go:130] ! W0223 22:21:47.761774 1293 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0223 22:21:47.829349 80620 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0223 22:21:47.829375 80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0223 22:21:47.829384 80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0223 22:21:47.829392 80620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0223 22:21:47.829573 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0223 22:21:47.919203 80620 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0223 22:21:47.922916 80620 command_runner.go:130] ! W0223 22:21:47.858650 1302 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
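[editor's note: The reconfigure path above drives five `kubeadm init phase` subcommands in a fixed order (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the same generated config. A sketch rendering those command lines, assuming the order shown in the log; it omits the `/bin/bash -c` wrapper the real runner uses:]

```go
package main

import "fmt"

// initPhases lists the kubeadm phases invoked above, in log order.
var initPhases = []string{
	"certs all",
	"kubeconfig all",
	"kubelet-start",
	"control-plane all",
	"etcd local",
}

// phaseCommands renders the full command line for each phase, matching
// the "Run: ... kubeadm init phase ..." entries in the log.
func phaseCommands(binDir, config string) []string {
	cmds := make([]string, 0, len(initPhases))
	for _, p := range initPhases {
		cmds = append(cmds, fmt.Sprintf(
			`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`,
			binDir, p, config))
	}
	return cmds
}

func main() {
	for _, c := range phaseCommands("/var/lib/minikube/binaries/v1.26.1", "/var/tmp/minikube/kubeadm.yaml") {
		fmt.Println(c)
	}
}
```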
I0223 22:21:47.923089 80620 api_server.go:51] waiting for apiserver process to appear ...
I0223 22:21:47.923171 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:21:48.438055 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:21:48.938524 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:21:49.437773 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:21:49.938504 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:21:50.438625 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:21:50.455679 80620 command_runner.go:130] > 1675
I0223 22:21:50.456038 80620 api_server.go:71] duration metric: took 2.532952682s to wait for apiserver process to appear ...
I0223 22:21:50.456061 80620 api_server.go:87] waiting for apiserver healthz status ...
I0223 22:21:50.456073 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:21:50.456563 80620 api_server.go:268] stopped: https://192.168.39.240:8443/healthz: Get "https://192.168.39.240:8443/healthz": dial tcp 192.168.39.240:8443: connect: connection refused
I0223 22:21:50.957285 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:21:53.851413 80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0223 22:21:53.851440 80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0223 22:21:53.957622 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:21:53.962959 80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0223 22:21:53.962996 80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0223 22:21:54.457567 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:21:54.462593 80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0223 22:21:54.462613 80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0223 22:21:54.957140 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:21:54.975573 80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0223 22:21:54.975619 80620 api_server.go:102] status: https://192.168.39.240:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0223 22:21:55.457159 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:21:55.468052 80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 200:
ok
I0223 22:21:55.468134 80620 round_trippers.go:463] GET https://192.168.39.240:8443/version
I0223 22:21:55.468145 80620 round_trippers.go:469] Request Headers:
I0223 22:21:55.468159 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:55.468173 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:55.478605 80620 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
I0223 22:21:55.478631 80620 round_trippers.go:577] Response Headers:
I0223 22:21:55.478639 80620 round_trippers.go:580] Content-Length: 263
I0223 22:21:55.478645 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:55 GMT
I0223 22:21:55.478651 80620 round_trippers.go:580] Audit-Id: 0e80152b-56d5-4ba7-8d3d-ebf4ef092ec4
I0223 22:21:55.478656 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:55.478661 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:55.478667 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:55.478677 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:55.478720 80620 request.go:1171] Response Body: {
"major": "1",
"minor": "26",
"gitVersion": "v1.26.1",
"gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
"gitTreeState": "clean",
"buildDate": "2023-01-18T15:51:25Z",
"goVersion": "go1.19.5",
"compiler": "gc",
"platform": "linux/amd64"
}
I0223 22:21:55.478820 80620 api_server.go:140] control plane version: v1.26.1
I0223 22:21:55.478837 80620 api_server.go:130] duration metric: took 5.022769855s to wait for apiserver health ...
I0223 22:21:55.478847 80620 cni.go:84] Creating CNI manager for ""
I0223 22:21:55.478864 80620 cni.go:136] 3 nodes found, recommending kindnet
I0223 22:21:55.481215 80620 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0223 22:21:55.482654 80620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0223 22:21:55.487827 80620 command_runner.go:130] > File: /opt/cni/bin/portmap
I0223 22:21:55.487850 80620 command_runner.go:130] > Size: 2798344 Blocks: 5472 IO Block: 4096 regular file
I0223 22:21:55.487860 80620 command_runner.go:130] > Device: 11h/17d Inode: 3542 Links: 1
I0223 22:21:55.487870 80620 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0223 22:21:55.487881 80620 command_runner.go:130] > Access: 2023-02-23 22:21:25.431985633 +0000
I0223 22:21:55.487897 80620 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
I0223 22:21:55.487905 80620 command_runner.go:130] > Change: 2023-02-23 22:21:23.668985633 +0000
I0223 22:21:55.487910 80620 command_runner.go:130] > Birth: -
I0223 22:21:55.488315 80620 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
I0223 22:21:55.488335 80620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0223 22:21:55.519404 80620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0223 22:21:56.635297 80620 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0223 22:21:56.642116 80620 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0223 22:21:56.645709 80620 command_runner.go:130] > serviceaccount/kindnet unchanged
I0223 22:21:56.664280 80620 command_runner.go:130] > daemonset.apps/kindnet configured
I0223 22:21:56.666573 80620 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.147136699s)
I0223 22:21:56.666612 80620 system_pods.go:43] waiting for kube-system pods to appear ...
I0223 22:21:56.666717 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:21:56.666728 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.666739 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.666748 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.670034 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:21:56.670049 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.670056 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.670062 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.670081 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.670087 80620 round_trippers.go:580] Audit-Id: 03e54a77-0840-4896-9a52-5cdd73109000
I0223 22:21:56.670100 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.670111 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.671358 80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"742"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"408","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82574 chars]
I0223 22:21:56.675255 80620 system_pods.go:59] 12 kube-system pods found
I0223 22:21:56.675279 80620 system_pods.go:61] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
I0223 22:21:56.675286 80620 system_pods.go:61] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0223 22:21:56.675291 80620 system_pods.go:61] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
I0223 22:21:56.675295 80620 system_pods.go:61] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
I0223 22:21:56.675316 80620 system_pods.go:61] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
I0223 22:21:56.675325 80620 system_pods.go:61] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
I0223 22:21:56.675337 80620 system_pods.go:61] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0223 22:21:56.675345 80620 system_pods.go:61] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
I0223 22:21:56.675349 80620 system_pods.go:61] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
I0223 22:21:56.675356 80620 system_pods.go:61] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
I0223 22:21:56.675361 80620 system_pods.go:61] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0223 22:21:56.675367 80620 system_pods.go:61] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
I0223 22:21:56.675372 80620 system_pods.go:74] duration metric: took 8.754325ms to wait for pod list to return data ...
I0223 22:21:56.675385 80620 node_conditions.go:102] verifying NodePressure condition ...
I0223 22:21:56.675430 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
I0223 22:21:56.675437 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.675444 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.675451 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.680543 80620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0223 22:21:56.680557 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.680564 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.680569 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.680577 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.680582 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.680589 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.680597 80620 round_trippers.go:580] Audit-Id: e86d112e-250e-4963-a6fb-b8fd3c902f59
I0223 22:21:56.681128 80620 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"742"},"items":[{"metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16319 chars]
I0223 22:21:56.681878 80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 22:21:56.681909 80620 node_conditions.go:123] node cpu capacity is 2
I0223 22:21:56.681918 80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 22:21:56.681922 80620 node_conditions.go:123] node cpu capacity is 2
I0223 22:21:56.681926 80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 22:21:56.681932 80620 node_conditions.go:123] node cpu capacity is 2
I0223 22:21:56.681938 80620 node_conditions.go:105] duration metric: took 6.549163ms to run NodePressure ...
I0223 22:21:56.681958 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0223 22:21:56.825426 80620 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I0223 22:21:56.885114 80620 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I0223 22:21:56.886787 80620 command_runner.go:130] ! W0223 22:21:56.690228 2212 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0223 22:21:56.886832 80620 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0223 22:21:56.886942 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
I0223 22:21:56.886954 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.886965 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.886975 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.889503 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:56.889525 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.889536 80620 round_trippers.go:580] Audit-Id: a9179ace-0f8b-41d7-acc9-15a5468f5431
I0223 22:21:56.889545 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.889552 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.889561 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.889569 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.889582 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.890569 80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"744"},"items":[{"metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"740","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29273 chars]
I0223 22:21:56.891994 80620 kubeadm.go:784] kubelet initialised
I0223 22:21:56.892020 80620 kubeadm.go:785] duration metric: took 5.174392ms waiting for restarted kubelet to initialise ...
I0223 22:21:56.892029 80620 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 22:21:56.892094 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:21:56.892105 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.892115 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.892126 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.898216 80620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0223 22:21:56.898231 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.898240 80620 round_trippers.go:580] Audit-Id: 0cbc9df8-5ddc-4405-a649-09747f9c7e5c
I0223 22:21:56.898250 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.898260 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.898268 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.898280 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.898290 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.899125 80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"744"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"408","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82574 chars]
I0223 22:21:56.901600 80620 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
I0223 22:21:56.901668 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:21:56.901680 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.901690 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.901697 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.906528 80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0223 22:21:56.906543 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.906552 80620 round_trippers.go:580] Audit-Id: c55b1693-f442-4306-a674-87f938885743
I0223 22:21:56.906561 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.906571 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.906580 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.906589 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.906602 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.906875 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:21:56.907276 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:56.907287 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.907294 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.907312 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.916593 80620 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0223 22:21:56.916608 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.916616 80620 round_trippers.go:580] Audit-Id: 3b9497a6-fa4c-472e-b004-b0b6906e7a7f
I0223 22:21:56.916625 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.916634 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.916644 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.916652 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.916662 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.916802 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:56.917117 80620 pod_ready.go:97] node "multinode-773885" hosting pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:56.917132 80620 pod_ready.go:81] duration metric: took 15.512217ms waiting for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
E0223 22:21:56.917139 80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:56.917145 80620 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:21:56.917197 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-773885
I0223 22:21:56.917206 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.917213 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.917219 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.919079 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:21:56.919091 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.919097 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.919103 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.919108 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.919114 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.919120 80620 round_trippers.go:580] Audit-Id: 143d00d2-5e6b-44b2-a517-c658e2dc5a9f
I0223 22:21:56.919129 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.919346 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"740","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6289 chars]
I0223 22:21:56.919779 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:56.919793 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.919802 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.919808 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.921391 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:21:56.921406 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.921413 80620 round_trippers.go:580] Audit-Id: 9f5eac9e-078a-4143-9d6d-1b1de0a3102a
I0223 22:21:56.921423 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.921431 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.921440 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.921450 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.921460 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.921618 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:56.921957 80620 pod_ready.go:97] node "multinode-773885" hosting pod "etcd-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:56.921972 80620 pod_ready.go:81] duration metric: took 4.821003ms waiting for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
E0223 22:21:56.921981 80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "etcd-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:56.921998 80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:21:56.922055 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-773885
I0223 22:21:56.922065 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.922076 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.922089 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.925010 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:56.925024 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.925033 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.925043 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.925052 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.925061 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.925070 80620 round_trippers.go:580] Audit-Id: 422d48f0-48d6-4c16-8b22-40f26357fc34
I0223 22:21:56.925075 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.925261 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-773885","namespace":"kube-system","uid":"f9cbb81f-f7c6-47e7-9e3c-393680d5ee52","resourceVersion":"282","creationTimestamp":"2023-02-23T22:17:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.240:8443","kubernetes.io/config.hash":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.mirror":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.seen":"2023-02-23T22:17:25.440360314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7392 chars]
I0223 22:21:56.925639 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:56.925652 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.925659 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.925666 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.927337 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:21:56.927356 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.927365 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.927373 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.927382 80620 round_trippers.go:580] Audit-Id: 020b9a46-ef43-4607-90e4-5d3e9e7d1a08
I0223 22:21:56.927392 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.927401 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.927413 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.927579 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:56.927921 80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-apiserver-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:56.927940 80620 pod_ready.go:81] duration metric: took 5.928725ms waiting for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
E0223 22:21:56.927950 80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-apiserver-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:56.927957 80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:21:56.928048 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-773885
I0223 22:21:56.928062 80620 round_trippers.go:469] Request Headers:
I0223 22:21:56.928072 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:56.928082 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:56.930936 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:56.930950 80620 round_trippers.go:577] Response Headers:
I0223 22:21:56.930956 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:56.930961 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:56.930968 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:56.930982 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:56 GMT
I0223 22:21:56.930995 80620 round_trippers.go:580] Audit-Id: 00aa01ac-5a84-4085-b3b5-f5f6d06fbe47
I0223 22:21:56.931005 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:56.931218 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-773885","namespace":"kube-system","uid":"df36fee9-6048-45f6-b17a-679c2c9e3daf","resourceVersion":"739","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.mirror":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.seen":"2023-02-23T22:17:38.195450048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7424 chars]
I0223 22:21:57.067070 80620 request.go:622] Waited for 135.338555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:57.067135 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:57.067145 80620 round_trippers.go:469] Request Headers:
I0223 22:21:57.067163 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:57.067176 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:57.070119 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:57.070137 80620 round_trippers.go:577] Response Headers:
I0223 22:21:57.070143 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:57.070149 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:57 GMT
I0223 22:21:57.070155 80620 round_trippers.go:580] Audit-Id: 5d3402dd-3874-4131-9278-561b1ef77762
I0223 22:21:57.070161 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:57.070167 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:57.070178 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:57.070297 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:57.070668 80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-controller-manager-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:57.070691 80620 pod_ready.go:81] duration metric: took 142.727116ms waiting for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
E0223 22:21:57.070704 80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-controller-manager-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:57.070713 80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
I0223 22:21:57.267166 80620 request.go:622] Waited for 196.388978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
I0223 22:21:57.267229 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
I0223 22:21:57.267239 80620 round_trippers.go:469] Request Headers:
I0223 22:21:57.267252 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:57.267264 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:57.269968 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:57.269991 80620 round_trippers.go:577] Response Headers:
I0223 22:21:57.270000 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:57.270012 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:57 GMT
I0223 22:21:57.270084 80620 round_trippers.go:580] Audit-Id: 27049171-e30c-4ab9-a6ed-77da398a4856
I0223 22:21:57.270104 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:57.270113 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:57.270123 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:57.270261 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5d5vn","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3dfcd7d-3514-4286-93e9-f51f9f91c2d7","resourceVersion":"491","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
I0223 22:21:57.467146 80620 request.go:622] Waited for 196.375195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
I0223 22:21:57.467201 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
I0223 22:21:57.467207 80620 round_trippers.go:469] Request Headers:
I0223 22:21:57.467216 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:57.467235 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:57.469655 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:57.469680 80620 round_trippers.go:577] Response Headers:
I0223 22:21:57.469690 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:57.469716 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:57 GMT
I0223 22:21:57.469727 80620 round_trippers.go:580] Audit-Id: d420f22f-77bb-4122-826c-40660cb2d6fb
I0223 22:21:57.469734 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:57.469741 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:57.469749 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:57.469921 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m02","uid":"6657df38-0b72-4f36-a536-d4626cf22c9b","resourceVersion":"560","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4513 chars]
I0223 22:21:57.470230 80620 pod_ready.go:92] pod "kube-proxy-5d5vn" in "kube-system" namespace has status "Ready":"True"
I0223 22:21:57.470242 80620 pod_ready.go:81] duration metric: took 399.521519ms waiting for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
I0223 22:21:57.470250 80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
I0223 22:21:57.667697 80620 request.go:622] Waited for 197.385632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
I0223 22:21:57.667766 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
I0223 22:21:57.667771 80620 round_trippers.go:469] Request Headers:
I0223 22:21:57.667778 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:57.667785 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:57.670278 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:57.670298 80620 round_trippers.go:577] Response Headers:
I0223 22:21:57.670308 80620 round_trippers.go:580] Audit-Id: 0128213a-339a-470c-989d-e7b486abebe1
I0223 22:21:57.670316 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:57.670324 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:57.670333 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:57.670342 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:57.670351 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:57 GMT
I0223 22:21:57.670879 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mdjks","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1cb3f4c-effa-4f0e-bbaa-ff792325a571","resourceVersion":"377","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
I0223 22:21:57.867695 80620 request.go:622] Waited for 196.388162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:57.867765 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:57.867770 80620 round_trippers.go:469] Request Headers:
I0223 22:21:57.867778 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:57.867784 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:57.870409 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:57.870431 80620 round_trippers.go:577] Response Headers:
I0223 22:21:57.870442 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:57.870452 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:57.870460 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:57.870466 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:57 GMT
I0223 22:21:57.870474 80620 round_trippers.go:580] Audit-Id: a53d6f4e-2730-4846-9147-87d2b5b1bc56
I0223 22:21:57.870483 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:57.870627 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:57.870935 80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-proxy-mdjks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:57.870951 80620 pod_ready.go:81] duration metric: took 400.694245ms waiting for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
E0223 22:21:57.870962 80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-proxy-mdjks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:57.870970 80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
I0223 22:21:58.067390 80620 request.go:622] Waited for 196.340619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
I0223 22:21:58.067527 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
I0223 22:21:58.067575 80620 round_trippers.go:469] Request Headers:
I0223 22:21:58.067593 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:58.067604 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:58.071162 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:21:58.071181 80620 round_trippers.go:577] Response Headers:
I0223 22:21:58.071191 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:58 GMT
I0223 22:21:58.071199 80620 round_trippers.go:580] Audit-Id: 49f82db0-63aa-4950-9457-03eeb73d1c6f
I0223 22:21:58.071207 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:58.071215 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:58.071223 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:58.071231 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:58.071517 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psgdt","generateName":"kube-proxy-","namespace":"kube-system","uid":"57d8204d-38f2-413f-8855-237db379cd27","resourceVersion":"721","creationTimestamp":"2023-02-23T22:19:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:19:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
I0223 22:21:58.267044 80620 request.go:622] Waited for 195.100843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
I0223 22:21:58.267131 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
I0223 22:21:58.267138 80620 round_trippers.go:469] Request Headers:
I0223 22:21:58.267150 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:58.267161 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:58.269786 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:58.269805 80620 round_trippers.go:577] Response Headers:
I0223 22:21:58.269812 80620 round_trippers.go:580] Audit-Id: 28398178-6b4f-4ced-bd50-76b0a4e432c0
I0223 22:21:58.269818 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:58.269823 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:58.269828 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:58.269833 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:58.269846 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:58 GMT
I0223 22:21:58.270022 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m03","uid":"22181ea8-5030-450a-9927-f28a8241ef6a","resourceVersion":"732","creationTimestamp":"2023-02-23T22:20:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4329 chars]
I0223 22:21:58.270353 80620 pod_ready.go:92] pod "kube-proxy-psgdt" in "kube-system" namespace has status "Ready":"True"
I0223 22:21:58.270367 80620 pod_ready.go:81] duration metric: took 399.384993ms waiting for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
I0223 22:21:58.270378 80620 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:21:58.467272 80620 request.go:622] Waited for 196.812846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
I0223 22:21:58.467358 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
I0223 22:21:58.467365 80620 round_trippers.go:469] Request Headers:
I0223 22:21:58.467376 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:58.467390 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:58.470141 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:58.470169 80620 round_trippers.go:577] Response Headers:
I0223 22:21:58.470179 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:58.470188 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:58.470195 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:58.470204 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:58.470213 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:58 GMT
I0223 22:21:58.470221 80620 round_trippers.go:580] Audit-Id: e5044b8f-aa40-4729-93fe-c25c71ca551c
I0223 22:21:58.470349 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-773885","namespace":"kube-system","uid":"ecc1fa39-40dc-4d57-be46-8e9a01431180","resourceVersion":"742","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.mirror":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.seen":"2023-02-23T22:17:38.195431871Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5136 chars]
I0223 22:21:58.667199 80620 request.go:622] Waited for 196.342723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:58.667264 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:58.667275 80620 round_trippers.go:469] Request Headers:
I0223 22:21:58.667288 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:58.667318 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:58.669825 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:58.669849 80620 round_trippers.go:577] Response Headers:
I0223 22:21:58.669860 80620 round_trippers.go:580] Audit-Id: 8c1fc862-a3d1-4b08-b8c2-f41fa6fd3cd6
I0223 22:21:58.669869 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:58.669877 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:58.669885 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:58.669899 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:58.669910 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:58 GMT
I0223 22:21:58.670129 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:58.670496 80620 pod_ready.go:97] node "multinode-773885" hosting pod "kube-scheduler-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:58.670517 80620 pod_ready.go:81] duration metric: took 400.130245ms waiting for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
E0223 22:21:58.670528 80620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-773885" hosting pod "kube-scheduler-multinode-773885" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-773885" has status "Ready":"False"
I0223 22:21:58.670539 80620 pod_ready.go:38] duration metric: took 1.778499138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 22:21:58.670563 80620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0223 22:21:58.684600 80620 command_runner.go:130] > -16
I0223 22:21:58.684633 80620 ops.go:34] apiserver oom_adj: -16
I0223 22:21:58.684642 80620 kubeadm.go:637] restartCluster took 21.880365731s
I0223 22:21:58.684651 80620 kubeadm.go:403] StartCluster complete in 21.912911073s
I0223 22:21:58.684672 80620 settings.go:142] acquiring lock: {Name:mk906211444ec0c60982da29f94c92fb57d72ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 22:21:58.684774 80620 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15909-59858/kubeconfig
I0223 22:21:58.685563 80620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-59858/kubeconfig: {Name:mkb3ee8537c1c29485268d18a34139db6a7d5ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 22:21:58.685892 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0223 22:21:58.686005 80620 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0223 22:21:58.686136 80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 22:21:58.686171 80620 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15909-59858/kubeconfig
I0223 22:21:58.687964 80620 out.go:177] * Enabled addons:
I0223 22:21:58.686508 80620 kapi.go:59] client config for multinode-773885: &rest.Config{Host:"https://192.168.39.240:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/client.key", CAFile:"/home/jenkins/minikube-integration/15909-59858/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0223 22:21:58.689318 80620 addons.go:492] enable addons completed in 3.316295ms: enabled=[]
I0223 22:21:58.689636 80620 round_trippers.go:463] GET https://192.168.39.240:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0223 22:21:58.689653 80620 round_trippers.go:469] Request Headers:
I0223 22:21:58.689665 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:58.689674 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:58.692405 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:58.692425 80620 round_trippers.go:577] Response Headers:
I0223 22:21:58.692435 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:58 GMT
I0223 22:21:58.692448 80620 round_trippers.go:580] Audit-Id: 2916b551-1504-4ee6-8f0b-8bb9b49c72fe
I0223 22:21:58.692457 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:58.692474 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:58.692486 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:58.692499 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:58.692512 80620 round_trippers.go:580] Content-Length: 291
I0223 22:21:58.692541 80620 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"88095e59-4c47-4f2e-9af0-397e7cc508de","resourceVersion":"743","creationTimestamp":"2023-02-23T22:17:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0223 22:21:58.692706 80620 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-773885" context rescaled to 1 replicas
I0223 22:21:58.692739 80620 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0223 22:21:58.694468 80620 out.go:177] * Verifying Kubernetes components...
I0223 22:21:58.696081 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 22:21:58.815357 80620 command_runner.go:130] > apiVersion: v1
I0223 22:21:58.815388 80620 command_runner.go:130] > data:
I0223 22:21:58.815395 80620 command_runner.go:130] >   Corefile: |
I0223 22:21:58.815401 80620 command_runner.go:130] >     .:53 {
I0223 22:21:58.815406 80620 command_runner.go:130] >         log
I0223 22:21:58.815414 80620 command_runner.go:130] >         errors
I0223 22:21:58.815423 80620 command_runner.go:130] >         health {
I0223 22:21:58.815430 80620 command_runner.go:130] >            lameduck 5s
I0223 22:21:58.815435 80620 command_runner.go:130] >         }
I0223 22:21:58.815443 80620 command_runner.go:130] >         ready
I0223 22:21:58.815455 80620 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
I0223 22:21:58.815461 80620 command_runner.go:130] >            pods insecure
I0223 22:21:58.815470 80620 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
I0223 22:21:58.815479 80620 command_runner.go:130] >            ttl 30
I0223 22:21:58.815485 80620 command_runner.go:130] >         }
I0223 22:21:58.815495 80620 command_runner.go:130] >         prometheus :9153
I0223 22:21:58.815501 80620 command_runner.go:130] >         hosts {
I0223 22:21:58.815510 80620 command_runner.go:130] >            192.168.39.1 host.minikube.internal
I0223 22:21:58.815517 80620 command_runner.go:130] >            fallthrough
I0223 22:21:58.815526 80620 command_runner.go:130] >         }
I0223 22:21:58.815537 80620 command_runner.go:130] >         forward . /etc/resolv.conf {
I0223 22:21:58.815545 80620 command_runner.go:130] >            max_concurrent 1000
I0223 22:21:58.815553 80620 command_runner.go:130] >         }
I0223 22:21:58.815563 80620 command_runner.go:130] >         cache 30
I0223 22:21:58.815574 80620 command_runner.go:130] >         loop
I0223 22:21:58.815583 80620 command_runner.go:130] >         reload
I0223 22:21:58.815595 80620 command_runner.go:130] >         loadbalance
I0223 22:21:58.815605 80620 command_runner.go:130] >     }
I0223 22:21:58.815614 80620 command_runner.go:130] > kind: ConfigMap
I0223 22:21:58.815623 80620 command_runner.go:130] > metadata:
I0223 22:21:58.815631 80620 command_runner.go:130] >   creationTimestamp: "2023-02-23T22:17:37Z"
I0223 22:21:58.815641 80620 command_runner.go:130] >   name: coredns
I0223 22:21:58.815651 80620 command_runner.go:130] >   namespace: kube-system
I0223 22:21:58.815660 80620 command_runner.go:130] >   resourceVersion: "360"
I0223 22:21:58.815671 80620 command_runner.go:130] >   uid: 79632023-f720-4e05-a063-411c24789887
I0223 22:21:58.818640 80620 node_ready.go:35] waiting up to 6m0s for node "multinode-773885" to be "Ready" ...
I0223 22:21:58.818784 80620 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0223 22:21:58.866997 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:58.867022 80620 round_trippers.go:469] Request Headers:
I0223 22:21:58.867036 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:58.867046 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:58.869514 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:58.869542 80620 round_trippers.go:577] Response Headers:
I0223 22:21:58.869553 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:58.869562 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:58.869568 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:58.869573 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:58 GMT
I0223 22:21:58.869579 80620 round_trippers.go:580] Audit-Id: ef8ca951-03a3-4673-b3b0-d6e949e3aba1
I0223 22:21:58.869586 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:58.869696 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:59.370801 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:59.370828 80620 round_trippers.go:469] Request Headers:
I0223 22:21:59.370840 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:59.370850 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:59.373237 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:59.373263 80620 round_trippers.go:577] Response Headers:
I0223 22:21:59.373275 80620 round_trippers.go:580] Audit-Id: cc5c5f53-65a1-48f1-8d30-2983a96a1517
I0223 22:21:59.373284 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:59.373292 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:59.373301 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:59.373310 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:59.373320 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:59 GMT
I0223 22:21:59.373432 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:21:59.871104 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:21:59.871130 80620 round_trippers.go:469] Request Headers:
I0223 22:21:59.871142 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:21:59.871152 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:21:59.873824 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:21:59.873849 80620 round_trippers.go:577] Response Headers:
I0223 22:21:59.873860 80620 round_trippers.go:580] Audit-Id: a0c12052-13ba-4532-b2cb-ef0712468e2c
I0223 22:21:59.873868 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:21:59.873877 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:21:59.873890 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:21:59.873898 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:21:59.873910 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:21:59 GMT
I0223 22:21:59.874344 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:00.371108 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:00.371138 80620 round_trippers.go:469] Request Headers:
I0223 22:22:00.371150 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:00.371160 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:00.373796 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:00.373818 80620 round_trippers.go:577] Response Headers:
I0223 22:22:00.373826 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:00.373832 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:00.373837 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:00.373843 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:00 GMT
I0223 22:22:00.373849 80620 round_trippers.go:580] Audit-Id: 6d76f1af-c5ab-44d4-ac95-d4a732c54af0
I0223 22:22:00.373861 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:00.374155 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:00.870897 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:00.870933 80620 round_trippers.go:469] Request Headers:
I0223 22:22:00.870942 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:00.870951 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:00.873427 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:00.873451 80620 round_trippers.go:577] Response Headers:
I0223 22:22:00.873462 80620 round_trippers.go:580] Audit-Id: 494f6db1-2d29-4a14-be25-f5115f464c6c
I0223 22:22:00.873471 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:00.873485 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:00.873495 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:00.873504 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:00.873512 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:00 GMT
I0223 22:22:00.873654 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:00.874130 80620 node_ready.go:58] node "multinode-773885" has status "Ready":"False"
I0223 22:22:01.370246 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:01.370268 80620 round_trippers.go:469] Request Headers:
I0223 22:22:01.370279 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:01.370286 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:01.372742 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:01.372768 80620 round_trippers.go:577] Response Headers:
I0223 22:22:01.372779 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:01.372787 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:01.372796 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:01.372808 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:01 GMT
I0223 22:22:01.372816 80620 round_trippers.go:580] Audit-Id: d657d94b-1177-4e47-9c6a-10517add9c29
I0223 22:22:01.372827 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:01.372974 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:01.870635 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:01.870664 80620 round_trippers.go:469] Request Headers:
I0223 22:22:01.870672 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:01.870679 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:01.873350 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:01.873373 80620 round_trippers.go:577] Response Headers:
I0223 22:22:01.873386 80620 round_trippers.go:580] Audit-Id: 3aae1eee-a094-424f-bbd3-1cc775206a05
I0223 22:22:01.873395 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:01.873403 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:01.873410 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:01.873419 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:01.873428 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:01 GMT
I0223 22:22:01.873701 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:02.370356 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:02.370378 80620 round_trippers.go:469] Request Headers:
I0223 22:22:02.370386 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:02.370392 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:02.373961 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:02.373983 80620 round_trippers.go:577] Response Headers:
I0223 22:22:02.373992 80620 round_trippers.go:580] Audit-Id: 2d8ae255-30e7-495f-82a8-f977058510be
I0223 22:22:02.374000 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:02.374008 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:02.374018 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:02.374028 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:02.374041 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:02 GMT
I0223 22:22:02.374362 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:02.871107 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:02.871133 80620 round_trippers.go:469] Request Headers:
I0223 22:22:02.871148 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:02.871157 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:02.873653 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:02.873672 80620 round_trippers.go:577] Response Headers:
I0223 22:22:02.873680 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:02.873686 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:02.873691 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:02 GMT
I0223 22:22:02.873697 80620 round_trippers.go:580] Audit-Id: 88e3a2a0-3a44-456c-a122-9443f9691153
I0223 22:22:02.873706 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:02.873715 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:02.874022 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:02.874437 80620 node_ready.go:58] node "multinode-773885" has status "Ready":"False"
I0223 22:22:03.370842 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:03.370869 80620 round_trippers.go:469] Request Headers:
I0223 22:22:03.370886 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:03.370894 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:03.372889 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:03.372909 80620 round_trippers.go:577] Response Headers:
I0223 22:22:03.372916 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:03.372922 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:03 GMT
I0223 22:22:03.372928 80620 round_trippers.go:580] Audit-Id: 553e23aa-d7b4-4f46-b968-491b3c19b7a9
I0223 22:22:03.372934 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:03.372942 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:03.372954 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:03.373055 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:03.870742 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:03.870764 80620 round_trippers.go:469] Request Headers:
I0223 22:22:03.870773 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:03.870779 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:03.873449 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:03.873469 80620 round_trippers.go:577] Response Headers:
I0223 22:22:03.873476 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:03.873482 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:03.873487 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:03 GMT
I0223 22:22:03.873493 80620 round_trippers.go:580] Audit-Id: d10ccbbb-11df-43ab-9526-c648f4eb57ab
I0223 22:22:03.873499 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:03.873504 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:03.873699 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:04.370303 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:04.370324 80620 round_trippers.go:469] Request Headers:
I0223 22:22:04.370332 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:04.370339 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:04.372813 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:04.372839 80620 round_trippers.go:577] Response Headers:
I0223 22:22:04.372851 80620 round_trippers.go:580] Audit-Id: bdad9e22-9644-4e1c-8f6c-ae6fc5d4caf1
I0223 22:22:04.372861 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:04.372870 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:04.372879 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:04.372893 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:04.372902 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:04 GMT
I0223 22:22:04.373649 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"736","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5440 chars]
I0223 22:22:04.870293 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:04.870319 80620 round_trippers.go:469] Request Headers:
I0223 22:22:04.870327 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:04.870333 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:04.873111 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:04.873137 80620 round_trippers.go:577] Response Headers:
I0223 22:22:04.873148 80620 round_trippers.go:580] Audit-Id: 356034ea-3c99-4375-a746-070c2cc9db4c
I0223 22:22:04.873157 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:04.873164 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:04.873172 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:04.873182 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:04.873192 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:04 GMT
I0223 22:22:04.873417 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:04.873740 80620 node_ready.go:49] node "multinode-773885" has status "Ready":"True"
I0223 22:22:04.873759 80620 node_ready.go:38] duration metric: took 6.055088164s waiting for node "multinode-773885" to be "Ready" ...
I0223 22:22:04.873768 80620 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 22:22:04.873821 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:22:04.873828 80620 round_trippers.go:469] Request Headers:
I0223 22:22:04.873836 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:04.873842 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:04.877171 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:04.877190 80620 round_trippers.go:577] Response Headers:
I0223 22:22:04.877199 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:04.877209 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:04.877217 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:04.877225 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:04.877234 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:04 GMT
I0223 22:22:04.877242 80620 round_trippers.go:580] Audit-Id: ea2e3ce7-5ec8-4de8-affe-00217b9f0f75
I0223 22:22:04.878185 80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"788"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83657 chars]
I0223 22:22:04.880661 80620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
I0223 22:22:04.880721 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:04.880729 80620 round_trippers.go:469] Request Headers:
I0223 22:22:04.880736 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:04.880743 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:04.882620 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:04.882637 80620 round_trippers.go:577] Response Headers:
I0223 22:22:04.882643 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:04.882649 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:04 GMT
I0223 22:22:04.882654 80620 round_trippers.go:580] Audit-Id: b8c34b52-e089-4d20-abac-792cd26a154e
I0223 22:22:04.882660 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:04.882665 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:04.882671 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:04.882780 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:04.883130 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:04.883141 80620 round_trippers.go:469] Request Headers:
I0223 22:22:04.883148 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:04.883154 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:04.885545 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:04.885559 80620 round_trippers.go:577] Response Headers:
I0223 22:22:04.885566 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:04.885571 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:04.885577 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:04.885582 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:04 GMT
I0223 22:22:04.885590 80620 round_trippers.go:580] Audit-Id: a935859f-b8a0-4ddc-8ffe-b88f374b4617
I0223 22:22:04.885597 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:04.885668 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:05.386735 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:05.386762 80620 round_trippers.go:469] Request Headers:
I0223 22:22:05.386775 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:05.386785 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:05.389024 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:05.389044 80620 round_trippers.go:577] Response Headers:
I0223 22:22:05.389055 80620 round_trippers.go:580] Audit-Id: 5162732a-6a2d-4976-bd1a-d7a30dbd6874
I0223 22:22:05.389063 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:05.389070 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:05.389082 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:05.389095 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:05.389103 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:05 GMT
I0223 22:22:05.389223 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:05.389693 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:05.389706 80620 round_trippers.go:469] Request Headers:
I0223 22:22:05.389713 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:05.389722 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:05.391445 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:05.391462 80620 round_trippers.go:577] Response Headers:
I0223 22:22:05.391469 80620 round_trippers.go:580] Audit-Id: 152ffe10-665f-45a2-8a81-8746544ba57e
I0223 22:22:05.391475 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:05.391482 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:05.391491 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:05.391501 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:05.391511 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:05 GMT
I0223 22:22:05.391627 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:05.886225 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:05.886248 80620 round_trippers.go:469] Request Headers:
I0223 22:22:05.886257 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:05.886264 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:05.888353 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:05.888389 80620 round_trippers.go:577] Response Headers:
I0223 22:22:05.888399 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:05.888408 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:05 GMT
I0223 22:22:05.888417 80620 round_trippers.go:580] Audit-Id: cc5f0143-2508-446f-907a-56ab533f7430
I0223 22:22:05.888426 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:05.888438 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:05.888446 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:05.889024 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:05.889458 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:05.889469 80620 round_trippers.go:469] Request Headers:
I0223 22:22:05.889476 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:05.889484 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:05.891242 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:05.891257 80620 round_trippers.go:577] Response Headers:
I0223 22:22:05.891263 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:05.891269 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:05.891275 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:05.891283 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:05.891293 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:05 GMT
I0223 22:22:05.891319 80620 round_trippers.go:580] Audit-Id: ee3b00fc-914b-4eba-8a45-e4597d8f6d25
I0223 22:22:05.891627 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:06.386281 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:06.386303 80620 round_trippers.go:469] Request Headers:
I0223 22:22:06.386311 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:06.386326 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:06.388974 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:06.388992 80620 round_trippers.go:577] Response Headers:
I0223 22:22:06.388999 80620 round_trippers.go:580] Audit-Id: 220c9abc-71ea-4bf1-984a-8b6e023377f1
I0223 22:22:06.389014 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:06.389026 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:06.389038 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:06.389046 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:06.389052 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:06 GMT
I0223 22:22:06.389842 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:06.390308 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:06.390321 80620 round_trippers.go:469] Request Headers:
I0223 22:22:06.390328 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:06.390337 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:06.391935 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:06.391953 80620 round_trippers.go:577] Response Headers:
I0223 22:22:06.391962 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:06.391970 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:06.391980 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:06 GMT
I0223 22:22:06.391989 80620 round_trippers.go:580] Audit-Id: 7685b789-c707-4d17-88af-7145585bce78
I0223 22:22:06.391998 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:06.392010 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:06.392362 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:06.886127 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:06.886150 80620 round_trippers.go:469] Request Headers:
I0223 22:22:06.886159 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:06.886165 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:06.889975 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:06.890001 80620 round_trippers.go:577] Response Headers:
I0223 22:22:06.890013 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:06.890023 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:06.890035 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:06 GMT
I0223 22:22:06.890048 80620 round_trippers.go:580] Audit-Id: 87848966-24d5-45b3-a7aa-56f65410f508
I0223 22:22:06.890057 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:06.890070 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:06.890267 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:06.890721 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:06.890734 80620 round_trippers.go:469] Request Headers:
I0223 22:22:06.890741 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:06.890747 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:06.895655 80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0223 22:22:06.895674 80620 round_trippers.go:577] Response Headers:
I0223 22:22:06.895684 80620 round_trippers.go:580] Audit-Id: f054bb7d-1199-4b8d-b3f0-4c0274f1d63d
I0223 22:22:06.895693 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:06.895702 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:06.895713 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:06.895724 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:06.895736 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:06 GMT
I0223 22:22:06.896139 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:06.896420 80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
I0223 22:22:07.386841 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:07.386862 80620 round_trippers.go:469] Request Headers:
I0223 22:22:07.386871 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:07.386878 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:07.389998 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:07.390025 80620 round_trippers.go:577] Response Headers:
I0223 22:22:07.390036 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:07.390046 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:07.390054 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:07.390062 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:07 GMT
I0223 22:22:07.390070 80620 round_trippers.go:580] Audit-Id: d6b7ea92-112f-499d-a61b-86d8245e8558
I0223 22:22:07.390078 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:07.390244 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:07.390679 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:07.390690 80620 round_trippers.go:469] Request Headers:
I0223 22:22:07.390698 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:07.390704 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:07.392927 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:07.392948 80620 round_trippers.go:577] Response Headers:
I0223 22:22:07.392958 80620 round_trippers.go:580] Audit-Id: e7498617-1172-42fd-b07a-d2d628e52a21
I0223 22:22:07.392969 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:07.392988 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:07.393002 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:07.393011 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:07.393022 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:07 GMT
I0223 22:22:07.393607 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:07.886231 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:07.886254 80620 round_trippers.go:469] Request Headers:
I0223 22:22:07.886277 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:07.886284 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:07.889328 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:07.889351 80620 round_trippers.go:577] Response Headers:
I0223 22:22:07.889359 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:07.889366 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:07 GMT
I0223 22:22:07.889371 80620 round_trippers.go:580] Audit-Id: 996a8d26-ab61-4eb1-a206-c0fb32514e06
I0223 22:22:07.889377 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:07.889382 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:07.889388 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:07.889970 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:07.890413 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:07.890425 80620 round_trippers.go:469] Request Headers:
I0223 22:22:07.890432 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:07.890439 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:07.897920 80620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0223 22:22:07.897934 80620 round_trippers.go:577] Response Headers:
I0223 22:22:07.897941 80620 round_trippers.go:580] Audit-Id: 4221b7db-ff10-4443-aed5-78c6f7b9296c
I0223 22:22:07.897947 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:07.897953 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:07.897958 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:07.897966 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:07.897972 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:07 GMT
I0223 22:22:07.898379 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:08.386191 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:08.386213 80620 round_trippers.go:469] Request Headers:
I0223 22:22:08.386224 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:08.386234 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:08.388618 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:08.388637 80620 round_trippers.go:577] Response Headers:
I0223 22:22:08.388644 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:08.388652 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:08.388660 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:08.388668 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:08 GMT
I0223 22:22:08.388689 80620 round_trippers.go:580] Audit-Id: 9fd3f354-aaea-4470-b0a9-a62bb9cf4b81
I0223 22:22:08.388695 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:08.389016 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:08.389462 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:08.389474 80620 round_trippers.go:469] Request Headers:
I0223 22:22:08.389484 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:08.389493 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:08.391347 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:08.391366 80620 round_trippers.go:577] Response Headers:
I0223 22:22:08.391376 80620 round_trippers.go:580] Audit-Id: d2b922bc-cc07-4d6a-a919-5b81247f7675
I0223 22:22:08.391385 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:08.391396 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:08.391405 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:08.391414 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:08.391419 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:08 GMT
I0223 22:22:08.391692 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:08.886358 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:08.886387 80620 round_trippers.go:469] Request Headers:
I0223 22:22:08.886397 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:08.886403 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:08.889174 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:08.889200 80620 round_trippers.go:577] Response Headers:
I0223 22:22:08.889209 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:08 GMT
I0223 22:22:08.889215 80620 round_trippers.go:580] Audit-Id: 7d35bf13-e46b-4b70-b379-eef2287d1352
I0223 22:22:08.889220 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:08.889226 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:08.889231 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:08.889236 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:08.889437 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:08.889910 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:08.889923 80620 round_trippers.go:469] Request Headers:
I0223 22:22:08.889931 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:08.889937 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:08.892893 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:08.892908 80620 round_trippers.go:577] Response Headers:
I0223 22:22:08.892914 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:08 GMT
I0223 22:22:08.892919 80620 round_trippers.go:580] Audit-Id: c156c99d-e130-4f55-b4e3-14616a7ba70f
I0223 22:22:08.892927 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:08.892936 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:08.892945 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:08.892956 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:08.893597 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:09.386240 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:09.386263 80620 round_trippers.go:469] Request Headers:
I0223 22:22:09.386272 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:09.386278 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:09.388959 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:09.388983 80620 round_trippers.go:577] Response Headers:
I0223 22:22:09.388991 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:09.388997 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:09.389002 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:09 GMT
I0223 22:22:09.389007 80620 round_trippers.go:580] Audit-Id: b1b9610c-e081-4bbb-837e-8be581f68475
I0223 22:22:09.389013 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:09.389018 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:09.389296 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:09.389849 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:09.389877 80620 round_trippers.go:469] Request Headers:
I0223 22:22:09.389888 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:09.389895 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:09.391871 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:09.391888 80620 round_trippers.go:577] Response Headers:
I0223 22:22:09.391895 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:09.391900 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:09.391906 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:09.391911 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:09.391916 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:09 GMT
I0223 22:22:09.391930 80620 round_trippers.go:580] Audit-Id: 002294de-1a26-4570-886e-0a7800195800
I0223 22:22:09.392074 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:09.392445 80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
I0223 22:22:09.886775 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:09.886796 80620 round_trippers.go:469] Request Headers:
I0223 22:22:09.886805 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:09.886812 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:09.889680 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:09.889703 80620 round_trippers.go:577] Response Headers:
I0223 22:22:09.889710 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:09.889716 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:09 GMT
I0223 22:22:09.889722 80620 round_trippers.go:580] Audit-Id: 3a94f330-f28f-46c4-a648-51998b06aed1
I0223 22:22:09.889730 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:09.889740 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:09.889749 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:09.889960 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:09.890412 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:09.890426 80620 round_trippers.go:469] Request Headers:
I0223 22:22:09.890433 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:09.890439 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:09.893112 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:09.893124 80620 round_trippers.go:577] Response Headers:
I0223 22:22:09.893131 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:09 GMT
I0223 22:22:09.893136 80620 round_trippers.go:580] Audit-Id: f1b19073-36ac-4a4c-b6c5-aa4b69ec1776
I0223 22:22:09.893141 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:09.893148 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:09.893156 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:09.893165 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:09.893436 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:10.386076 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:10.386100 80620 round_trippers.go:469] Request Headers:
I0223 22:22:10.386109 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:10.386115 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:10.388462 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:10.388484 80620 round_trippers.go:577] Response Headers:
I0223 22:22:10.388491 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:10.388497 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:10.388502 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:10 GMT
I0223 22:22:10.388508 80620 round_trippers.go:580] Audit-Id: b0c0f970-513c-4958-8f0f-9012dbfa36d5
I0223 22:22:10.388513 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:10.388518 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:10.388755 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"745","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6543 chars]
I0223 22:22:10.389295 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:10.389312 80620 round_trippers.go:469] Request Headers:
I0223 22:22:10.389323 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:10.389333 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:10.391529 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:10.391550 80620 round_trippers.go:577] Response Headers:
I0223 22:22:10.391560 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:10.391568 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:10.391574 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:10.391582 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:10.391587 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:10 GMT
I0223 22:22:10.391593 80620 round_trippers.go:580] Audit-Id: 10261026-5803-485c-834a-bf21f0cb79e3
I0223 22:22:10.391676 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:10.886276 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:10.886298 80620 round_trippers.go:469] Request Headers:
I0223 22:22:10.886310 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:10.886319 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:10.890190 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:10.890215 80620 round_trippers.go:577] Response Headers:
I0223 22:22:10.890222 80620 round_trippers.go:580] Audit-Id: b6386ff9-de93-4709-b3ef-d903d0d5a9cc
I0223 22:22:10.890228 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:10.890234 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:10.890239 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:10.890245 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:10.890251 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:10 GMT
I0223 22:22:10.890402 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"836","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
I0223 22:22:10.890869 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:10.890883 80620 round_trippers.go:469] Request Headers:
I0223 22:22:10.890893 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:10.890902 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:10.895016 80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0223 22:22:10.895035 80620 round_trippers.go:577] Response Headers:
I0223 22:22:10.895046 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:10.895055 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:10.895064 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:10.895073 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:10 GMT
I0223 22:22:10.895080 80620 round_trippers.go:580] Audit-Id: 2e664d84-586c-4ab6-94bc-ba77835a654d
I0223 22:22:10.895085 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:10.895436 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:11.386154 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:11.386182 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.386193 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.386202 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.388774 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.388795 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.388805 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.388814 80620 round_trippers.go:580] Audit-Id: 0b53d934-8f77-4a2f-bbe6-92be4d3d5c17
I0223 22:22:11.388822 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.388831 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.388848 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.388858 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.389048 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"836","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6720 chars]
I0223 22:22:11.389509 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:11.389522 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.389532 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.389541 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.391436 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:11.391458 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.391475 80620 round_trippers.go:580] Audit-Id: f0d5469c-1828-43e0-99ac-880d59c5ca18
I0223 22:22:11.391486 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.391496 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.391502 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.391508 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.391514 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.392144 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:11.392489 80620 pod_ready.go:102] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"False"
I0223 22:22:11.886705 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-ktr7h
I0223 22:22:11.886728 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.886740 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.886747 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.897949 80620 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0223 22:22:11.897972 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.897979 80620 round_trippers.go:580] Audit-Id: ee3fad82-cb14-466d-be80-d787cdfe18c6
I0223 22:22:11.897988 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.897996 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.898005 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.898014 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.898023 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.898203 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6491 chars]
I0223 22:22:11.898695 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:11.898709 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.898716 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.898722 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.901522 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.901537 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.901546 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.901555 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.901565 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.901574 80620 round_trippers.go:580] Audit-Id: 67ab3f98-4824-4d37-9baa-d6fde6241cd3
I0223 22:22:11.901583 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.901592 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.901884 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:11.902261 80620 pod_ready.go:92] pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:11.902281 80620 pod_ready.go:81] duration metric: took 7.021599209s waiting for pod "coredns-787d4945fb-ktr7h" in "kube-system" namespace to be "Ready" ...
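The `pod_ready` lines above come from minikube polling each pod's JSON until its `Ready` condition flips to `"True"`. As a rough illustration of that check (not minikube's actual `pod_ready.go` code, which is written in Go against client-go types), a minimal sketch over the pod JSON shape seen in the response bodies above:

```python
# Hypothetical helper mirroring the readiness check minikube performs:
# scan status.conditions for the "Ready" condition and compare its status.
def pod_is_ready(pod: dict) -> bool:
    """Return True iff the pod's Ready condition has status "True"."""
    for cond in pod.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False

# Minimal pod fragment shaped like the API responses in this log
# (the real bodies are truncated above, so the status here is illustrative).
sample = {
    "kind": "Pod",
    "metadata": {"name": "coredns-787d4945fb-ktr7h", "namespace": "kube-system"},
    "status": {"conditions": [{"type": "Ready", "status": "True"}]},
}
print(pod_is_ready(sample))  # → True
```

minikube repeats this GET roughly every 500ms (visible in the timestamps) until the condition is true or the 6m0s deadline expires.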
I0223 22:22:11.902292 80620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.902345 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-773885
I0223 22:22:11.902362 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.902374 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.902387 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.905539 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:11.905555 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.905564 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.905573 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.905584 80620 round_trippers.go:580] Audit-Id: b11ef536-b4c5-482e-aa7c-76d59636d5d2
I0223 22:22:11.905592 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.905600 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.905608 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.906366 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-773885","namespace":"kube-system","uid":"60237072-2e86-40a3-90d9-87b8bccfb848","resourceVersion":"802","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.240:2379","kubernetes.io/config.hash":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.mirror":"91b4cc1c44cea64bca98c39307e93683","kubernetes.io/config.seen":"2023-02-23T22:17:38.195447866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6065 chars]
I0223 22:22:11.906856 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:11.906876 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.906892 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.906903 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.908814 80620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0223 22:22:11.908827 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.908833 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.908838 80620 round_trippers.go:580] Audit-Id: afa24933-99a3-4732-ab8c-89f796285545
I0223 22:22:11.908844 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.908849 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.908860 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.908868 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.909140 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:11.909495 80620 pod_ready.go:92] pod "etcd-multinode-773885" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:11.909509 80620 pod_ready.go:81] duration metric: took 7.209083ms waiting for pod "etcd-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.909528 80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.909582 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-773885
I0223 22:22:11.909592 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.909603 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.909616 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.911700 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.911720 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.911729 80620 round_trippers.go:580] Audit-Id: 779ea438-bd06-40b6-ba45-805cc766e96d
I0223 22:22:11.911737 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.911745 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.911754 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.911762 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.911772 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.911987 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-773885","namespace":"kube-system","uid":"f9cbb81f-f7c6-47e7-9e3c-393680d5ee52","resourceVersion":"793","creationTimestamp":"2023-02-23T22:17:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.240:8443","kubernetes.io/config.hash":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.mirror":"e9459d167995578fa153c781fb0ec958","kubernetes.io/config.seen":"2023-02-23T22:17:25.440360314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7599 chars]
I0223 22:22:11.912445 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:11.912459 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.912475 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.912485 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.914590 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.914610 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.914619 80620 round_trippers.go:580] Audit-Id: 05b9d526-86d7-43a1-a29b-8b19eb1394d1
I0223 22:22:11.914628 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.914637 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.914659 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.914670 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.914685 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.914841 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:11.915184 80620 pod_ready.go:92] pod "kube-apiserver-multinode-773885" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:11.915198 80620 pod_ready.go:81] duration metric: took 5.656927ms waiting for pod "kube-apiserver-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.915207 80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.915261 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-773885
I0223 22:22:11.915271 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.915282 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.915294 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.917370 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.917390 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.917400 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.917407 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.917416 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.917424 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.917434 80620 round_trippers.go:580] Audit-Id: 1c6ec0cd-a712-46c0-9127-fc5aaaf54dca
I0223 22:22:11.917444 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.917666 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-773885","namespace":"kube-system","uid":"df36fee9-6048-45f6-b17a-679c2c9e3daf","resourceVersion":"825","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.mirror":"0e6f7531ae8f8d5272d8480f1366600f","kubernetes.io/config.seen":"2023-02-23T22:17:38.195450048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7162 chars]
I0223 22:22:11.918056 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:11.918067 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.918078 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.918090 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.920329 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.920349 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.920359 80620 round_trippers.go:580] Audit-Id: 4abce7c0-9628-4d94-8005-2a2dfc23a6e7
I0223 22:22:11.920367 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.920377 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.920386 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.920394 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.920410 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.921292 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:11.921655 80620 pod_ready.go:92] pod "kube-controller-manager-multinode-773885" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:11.921672 80620 pod_ready.go:81] duration metric: took 6.456858ms waiting for pod "kube-controller-manager-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.921682 80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.921744 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5d5vn
I0223 22:22:11.921759 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.921770 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.921788 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.923979 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.923999 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.924008 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.924016 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.924024 80620 round_trippers.go:580] Audit-Id: 0efbb785-cf58-48c7-81ba-79e7df1fffe6
I0223 22:22:11.924037 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.924045 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.924054 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.924324 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5d5vn","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3dfcd7d-3514-4286-93e9-f51f9f91c2d7","resourceVersion":"491","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
I0223 22:22:11.924642 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m02
I0223 22:22:11.924651 80620 round_trippers.go:469] Request Headers:
I0223 22:22:11.924659 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:11.924668 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:11.927145 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:11.927164 80620 round_trippers.go:577] Response Headers:
I0223 22:22:11.927174 80620 round_trippers.go:580] Audit-Id: d525fadc-555c-4d29-8ba1-8f98e144287a
I0223 22:22:11.927190 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:11.927201 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:11.927209 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:11.927221 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:11.927230 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:11 GMT
I0223 22:22:11.927662 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m02","uid":"6657df38-0b72-4f36-a536-d4626cf22c9b","resourceVersion":"560","creationTimestamp":"2023-02-23T22:18:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:18:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4513 chars]
I0223 22:22:11.927907 80620 pod_ready.go:92] pod "kube-proxy-5d5vn" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:11.927917 80620 pod_ready.go:81] duration metric: took 6.229355ms waiting for pod "kube-proxy-5d5vn" in "kube-system" namespace to be "Ready" ...
I0223 22:22:11.927924 80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
I0223 22:22:12.087372 80620 request.go:622] Waited for 159.388811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
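The "Waited … due to client-side throttling" message above is emitted by client-go's built-in token-bucket rate limiter (configured via QPS/burst on the REST client), not by server-side API Priority and Fairness. A toy sketch of that throttling behavior, under the assumption of a simple token bucket (this is not client-go's actual implementation):

```python
import time

class TokenBucket:
    """Toy client-side rate limiter in the spirit of client-go's
    QPS/burst token bucket. Hypothetical; for illustration only."""

    def __init__(self, qps: float, burst: int):
        self.qps = qps            # steady-state refill rate (tokens/sec)
        self.burst = burst        # bucket capacity
        self.tokens = float(burst)
        self.last = time.monotonic()

    def wait_time(self) -> float:
        """Reserve one token; return seconds the caller must wait first."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at burst.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return 0.0
        # Not enough tokens: caller must wait for the deficit to refill.
        wait = (1.0 - self.tokens) / self.qps
        self.tokens = 0.0
        return wait

bucket = TokenBucket(qps=5.0, burst=2)
```

With burst exhausted, each further request waits about `1/qps` seconds, which is the ~160–200ms delay reported in the log lines here.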
I0223 22:22:12.087472 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mdjks
I0223 22:22:12.087484 80620 round_trippers.go:469] Request Headers:
I0223 22:22:12.087494 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:12.087506 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:12.090953 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:12.090975 80620 round_trippers.go:577] Response Headers:
I0223 22:22:12.090982 80620 round_trippers.go:580] Audit-Id: d476c971-82f9-4e13-bf24-ac1d0a7e0132
I0223 22:22:12.090988 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:12.091000 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:12.091015 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:12.091023 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:12.091034 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:12 GMT
I0223 22:22:12.091257 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mdjks","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1cb3f4c-effa-4f0e-bbaa-ff792325a571","resourceVersion":"751","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
I0223 22:22:12.287106 80620 request.go:622] Waited for 195.345935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:12.287171 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:12.287176 80620 round_trippers.go:469] Request Headers:
I0223 22:22:12.287184 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:12.287190 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:12.290450 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:12.290482 80620 round_trippers.go:577] Response Headers:
I0223 22:22:12.290493 80620 round_trippers.go:580] Audit-Id: 293be0f3-4481-47c8-8397-f5bcd5d19b91
I0223 22:22:12.290503 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:12.290511 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:12.290527 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:12.290541 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:12.290550 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:12 GMT
I0223 22:22:12.290685 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:12.290991 80620 pod_ready.go:92] pod "kube-proxy-mdjks" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:12.291002 80620 pod_ready.go:81] duration metric: took 363.073923ms waiting for pod "kube-proxy-mdjks" in "kube-system" namespace to be "Ready" ...
I0223 22:22:12.291011 80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
I0223 22:22:12.487380 80620 request.go:622] Waited for 196.297867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
I0223 22:22:12.487451 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psgdt
I0223 22:22:12.487455 80620 round_trippers.go:469] Request Headers:
I0223 22:22:12.487463 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:12.487470 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:12.490351 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:12.490369 80620 round_trippers.go:577] Response Headers:
I0223 22:22:12.490376 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:12.490382 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:12.490390 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:12.490396 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:12.490402 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:12 GMT
I0223 22:22:12.490408 80620 round_trippers.go:580] Audit-Id: 3101849d-f3a0-4ede-99b6-2a380cea5ba6
I0223 22:22:12.490636 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psgdt","generateName":"kube-proxy-","namespace":"kube-system","uid":"57d8204d-38f2-413f-8855-237db379cd27","resourceVersion":"721","creationTimestamp":"2023-02-23T22:19:46Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c09d151-d17b-498c-933a-7c23c0986b3e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:19:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c09d151-d17b-498c-933a-7c23c0986b3e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
I0223 22:22:12.687374 80620 request.go:622] Waited for 196.32053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
I0223 22:22:12.687452 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885-m03
I0223 22:22:12.687458 80620 round_trippers.go:469] Request Headers:
I0223 22:22:12.687466 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:12.687472 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:12.690923 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:12.690945 80620 round_trippers.go:577] Response Headers:
I0223 22:22:12.690952 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:12.690958 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:12.690963 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:12.690969 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:12 GMT
I0223 22:22:12.690975 80620 round_trippers.go:580] Audit-Id: f8604e33-edeb-42ae-8e19-5e27a6bd8d7d
I0223 22:22:12.690980 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:12.693472 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885-m03","uid":"22181ea8-5030-450a-9927-f28a8241ef6a","resourceVersion":"732","creationTimestamp":"2023-02-23T22:20:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4329 chars]
I0223 22:22:12.693842 80620 pod_ready.go:92] pod "kube-proxy-psgdt" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:12.693857 80620 pod_ready.go:81] duration metric: took 402.838971ms waiting for pod "kube-proxy-psgdt" in "kube-system" namespace to be "Ready" ...
I0223 22:22:12.693868 80620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:12.886856 80620 request.go:622] Waited for 192.90851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
I0223 22:22:12.886917 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773885
I0223 22:22:12.886932 80620 round_trippers.go:469] Request Headers:
I0223 22:22:12.886943 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:12.886952 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:12.893080 80620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0223 22:22:12.893102 80620 round_trippers.go:577] Response Headers:
I0223 22:22:12.893109 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:12 GMT
I0223 22:22:12.893115 80620 round_trippers.go:580] Audit-Id: 854e2fd9-4c25-4b2f-bc59-61d21fabfb74
I0223 22:22:12.893120 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:12.893125 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:12.893131 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:12.893136 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:12.893332 80620 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-773885","namespace":"kube-system","uid":"ecc1fa39-40dc-4d57-be46-8e9a01431180","resourceVersion":"786","creationTimestamp":"2023-02-23T22:17:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.mirror":"ad8bcf66bd91c38b64df37533d4529bd","kubernetes.io/config.seen":"2023-02-23T22:17:38.195431871Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4892 chars]
I0223 22:22:13.087065 80620 request.go:622] Waited for 193.332526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:13.087127 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/multinode-773885
I0223 22:22:13.087133 80620 round_trippers.go:469] Request Headers:
I0223 22:22:13.087143 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:13.087153 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:13.091144 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:13.091162 80620 round_trippers.go:577] Response Headers:
I0223 22:22:13.091169 80620 round_trippers.go:580] Audit-Id: bf568af1-d7fc-4da0-9559-42a27fc0cef3
I0223 22:22:13.091175 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:13.091181 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:13.091186 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:13.091198 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:13.091210 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:13 GMT
I0223 22:22:13.091630 80620 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:17:34Z","fieldsType":"FieldsV1","fi [truncated 5313 chars]
I0223 22:22:13.091948 80620 pod_ready.go:92] pod "kube-scheduler-multinode-773885" in "kube-system" namespace has status "Ready":"True"
I0223 22:22:13.091980 80620 pod_ready.go:81] duration metric: took 398.085634ms waiting for pod "kube-scheduler-multinode-773885" in "kube-system" namespace to be "Ready" ...
I0223 22:22:13.091998 80620 pod_ready.go:38] duration metric: took 8.218220101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 22:22:13.092020 80620 api_server.go:51] waiting for apiserver process to appear ...
I0223 22:22:13.092066 80620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 22:22:13.104775 80620 command_runner.go:130] > 1675
I0223 22:22:13.104818 80620 api_server.go:71] duration metric: took 14.412044719s to wait for apiserver process to appear ...
I0223 22:22:13.104835 80620 api_server.go:87] waiting for apiserver healthz status ...
I0223 22:22:13.104847 80620 api_server.go:252] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
I0223 22:22:13.110111 80620 api_server.go:278] https://192.168.39.240:8443/healthz returned 200:
ok
I0223 22:22:13.110176 80620 round_trippers.go:463] GET https://192.168.39.240:8443/version
I0223 22:22:13.110187 80620 round_trippers.go:469] Request Headers:
I0223 22:22:13.110206 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:13.110217 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:13.110872 80620 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0223 22:22:13.110888 80620 round_trippers.go:577] Response Headers:
I0223 22:22:13.110895 80620 round_trippers.go:580] Audit-Id: 4f7ff6ce-bed0-47c2-918d-6dd15db9ce31
I0223 22:22:13.110901 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:13.110906 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:13.110911 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:13.110918 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:13.110923 80620 round_trippers.go:580] Content-Length: 263
I0223 22:22:13.110930 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:13 GMT
I0223 22:22:13.110950 80620 request.go:1171] Response Body: {
"major": "1",
"minor": "26",
"gitVersion": "v1.26.1",
"gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
"gitTreeState": "clean",
"buildDate": "2023-01-18T15:51:25Z",
"goVersion": "go1.19.5",
"compiler": "gc",
"platform": "linux/amd64"
}
I0223 22:22:13.111007 80620 api_server.go:140] control plane version: v1.26.1
I0223 22:22:13.111018 80620 api_server.go:130] duration metric: took 6.177354ms to wait for apiserver health ...
I0223 22:22:13.111024 80620 system_pods.go:43] waiting for kube-system pods to appear ...
I0223 22:22:13.287730 80620 request.go:622] Waited for 176.607463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:22:13.287780 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:22:13.287784 80620 round_trippers.go:469] Request Headers:
I0223 22:22:13.287794 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:13.287804 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:13.292061 80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0223 22:22:13.292080 80620 round_trippers.go:577] Response Headers:
I0223 22:22:13.292087 80620 round_trippers.go:580] Audit-Id: 8f903081-07eb-4386-b54e-2c988265836f
I0223 22:22:13.292096 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:13.292104 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:13.292110 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:13.292116 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:13.292121 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:13 GMT
I0223 22:22:13.294183 80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"859"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82875 chars]
I0223 22:22:13.296686 80620 system_pods.go:59] 12 kube-system pods found
I0223 22:22:13.296706 80620 system_pods.go:61] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
I0223 22:22:13.296711 80620 system_pods.go:61] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running
I0223 22:22:13.296715 80620 system_pods.go:61] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
I0223 22:22:13.296719 80620 system_pods.go:61] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
I0223 22:22:13.296723 80620 system_pods.go:61] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
I0223 22:22:13.296727 80620 system_pods.go:61] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
I0223 22:22:13.296731 80620 system_pods.go:61] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running
I0223 22:22:13.296737 80620 system_pods.go:61] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
I0223 22:22:13.296741 80620 system_pods.go:61] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
I0223 22:22:13.296745 80620 system_pods.go:61] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
I0223 22:22:13.296750 80620 system_pods.go:61] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running
I0223 22:22:13.296754 80620 system_pods.go:61] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
I0223 22:22:13.296759 80620 system_pods.go:74] duration metric: took 185.729884ms to wait for pod list to return data ...
I0223 22:22:13.296768 80620 default_sa.go:34] waiting for default service account to be created ...
I0223 22:22:13.487059 80620 request.go:622] Waited for 190.213748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
I0223 22:22:13.487142 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
I0223 22:22:13.487151 80620 round_trippers.go:469] Request Headers:
I0223 22:22:13.487163 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:13.487179 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:13.490660 80620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0223 22:22:13.490686 80620 round_trippers.go:577] Response Headers:
I0223 22:22:13.490698 80620 round_trippers.go:580] Content-Length: 261
I0223 22:22:13.490707 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:13 GMT
I0223 22:22:13.490715 80620 round_trippers.go:580] Audit-Id: b33f914f-7659-4fc8-8f76-26f7e677ba77
I0223 22:22:13.490724 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:13.490733 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:13.490746 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:13.490755 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:13.490784 80620 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"860"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"62ac0740-2090-4217-a812-0d7ea88a967e","resourceVersion":"301","creationTimestamp":"2023-02-23T22:17:49Z"}}]}
I0223 22:22:13.491028 80620 default_sa.go:45] found service account: "default"
I0223 22:22:13.491048 80620 default_sa.go:55] duration metric: took 194.273065ms for default service account to be created ...
I0223 22:22:13.491059 80620 system_pods.go:116] waiting for k8s-apps to be running ...
I0223 22:22:13.687553 80620 request.go:622] Waited for 196.395892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:22:13.687624 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
I0223 22:22:13.687630 80620 round_trippers.go:469] Request Headers:
I0223 22:22:13.687642 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:13.687659 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:13.691923 80620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0223 22:22:13.691949 80620 round_trippers.go:577] Response Headers:
I0223 22:22:13.691960 80620 round_trippers.go:580] Audit-Id: b99f1d26-3de6-4548-9948-e1ef63d9e02a
I0223 22:22:13.691969 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:13.691980 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:13.691988 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:13.691997 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:13.692005 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:13 GMT
I0223 22:22:13.693522 80620 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"coredns-787d4945fb-ktr7h","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5337fe89-b5a2-4562-84e3-3a7e1f201ff5","resourceVersion":"844","creationTimestamp":"2023-02-23T22:17:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"53662893-5ffc-4dd0-ad22-0a60e1f2bff9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:17:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53662893-5ffc-4dd0-ad22-0a60e1f2bff9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82875 chars]
I0223 22:22:13.695955 80620 system_pods.go:86] 12 kube-system pods found
I0223 22:22:13.695978 80620 system_pods.go:89] "coredns-787d4945fb-ktr7h" [5337fe89-b5a2-4562-84e3-3a7e1f201ff5] Running
I0223 22:22:13.695985 80620 system_pods.go:89] "etcd-multinode-773885" [60237072-2e86-40a3-90d9-87b8bccfb848] Running
I0223 22:22:13.695993 80620 system_pods.go:89] "kindnet-fbfsf" [ee9a7e70-300e-4767-a949-fdfe5454dcfd] Running
I0223 22:22:13.695999 80620 system_pods.go:89] "kindnet-fg44s" [0b0a1b91-fd91-40af-8190-e7ba49a8fc0f] Running
I0223 22:22:13.696005 80620 system_pods.go:89] "kindnet-p64zr" [393cb53c-0242-40f7-af70-275ea6f9b40b] Running
I0223 22:22:13.696012 80620 system_pods.go:89] "kube-apiserver-multinode-773885" [f9cbb81f-f7c6-47e7-9e3c-393680d5ee52] Running
I0223 22:22:13.696020 80620 system_pods.go:89] "kube-controller-manager-multinode-773885" [df36fee9-6048-45f6-b17a-679c2c9e3daf] Running
I0223 22:22:13.696028 80620 system_pods.go:89] "kube-proxy-5d5vn" [f3dfcd7d-3514-4286-93e9-f51f9f91c2d7] Running
I0223 22:22:13.696040 80620 system_pods.go:89] "kube-proxy-mdjks" [d1cb3f4c-effa-4f0e-bbaa-ff792325a571] Running
I0223 22:22:13.696048 80620 system_pods.go:89] "kube-proxy-psgdt" [57d8204d-38f2-413f-8855-237db379cd27] Running
I0223 22:22:13.696055 80620 system_pods.go:89] "kube-scheduler-multinode-773885" [ecc1fa39-40dc-4d57-be46-8e9a01431180] Running
I0223 22:22:13.696061 80620 system_pods.go:89] "storage-provisioner" [62cc7ef3-a47f-45ce-a9af-cf4de3e1824d] Running
I0223 22:22:13.696071 80620 system_pods.go:126] duration metric: took 205.005964ms to wait for k8s-apps to be running ...
I0223 22:22:13.696085 80620 system_svc.go:44] waiting for kubelet service to be running ....
I0223 22:22:13.696135 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 22:22:13.709623 80620 system_svc.go:56] duration metric: took 13.531533ms WaitForService to wait for kubelet.
I0223 22:22:13.709679 80620 kubeadm.go:578] duration metric: took 15.016875282s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0223 22:22:13.709713 80620 node_conditions.go:102] verifying NodePressure condition ...
I0223 22:22:13.887138 80620 request.go:622] Waited for 177.351024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
I0223 22:22:13.887250 80620 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
I0223 22:22:13.887261 80620 round_trippers.go:469] Request Headers:
I0223 22:22:13.887269 80620 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0223 22:22:13.887276 80620 round_trippers.go:473] Accept: application/json, */*
I0223 22:22:13.889579 80620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0223 22:22:13.889601 80620 round_trippers.go:577] Response Headers:
I0223 22:22:13.889608 80620 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 8ba72ec5-1b2a-409d-bf22-b64137844518
I0223 22:22:13.889614 80620 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 6604e9e7-3e3f-49a9-8dac-e851673cdc90
I0223 22:22:13.889620 80620 round_trippers.go:580] Date: Thu, 23 Feb 2023 22:22:13 GMT
I0223 22:22:13.889625 80620 round_trippers.go:580] Audit-Id: 4402b5a7-68c0-489c-bf87-bedbd28a14fe
I0223 22:22:13.889631 80620 round_trippers.go:580] Cache-Control: no-cache, private
I0223 22:22:13.889636 80620 round_trippers.go:580] Content-Type: application/json
I0223 22:22:13.889855 80620 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"862"},"items":[{"metadata":{"name":"multinode-773885","uid":"230fa80a-71ce-4d35-b8e6-fbc8c35b441a","resourceVersion":"785","creationTimestamp":"2023-02-23T22:17:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-773885","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-773885","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_17_39_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16192 chars]
I0223 22:22:13.890436 80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 22:22:13.890455 80620 node_conditions.go:123] node cpu capacity is 2
I0223 22:22:13.890468 80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 22:22:13.890474 80620 node_conditions.go:123] node cpu capacity is 2
I0223 22:22:13.890481 80620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 22:22:13.890489 80620 node_conditions.go:123] node cpu capacity is 2
I0223 22:22:13.890496 80620 node_conditions.go:105] duration metric: took 180.777399ms to run NodePressure ...
I0223 22:22:13.890512 80620 start.go:228] waiting for startup goroutines ...
I0223 22:22:13.890522 80620 start.go:233] waiting for cluster config update ...
I0223 22:22:13.890533 80620 start.go:242] writing updated cluster config ...
I0223 22:22:13.890966 80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 22:22:13.891077 80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
I0223 22:22:13.893728 80620 out.go:177] * Starting worker node multinode-773885-m02 in cluster multinode-773885
I0223 22:22:13.895212 80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0223 22:22:13.895236 80620 cache.go:57] Caching tarball of preloaded images
I0223 22:22:13.895333 80620 preload.go:174] Found /home/jenkins/minikube-integration/15909-59858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0223 22:22:13.895345 80620 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0223 22:22:13.895468 80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
I0223 22:22:13.895625 80620 cache.go:193] Successfully downloaded all kic artifacts
I0223 22:22:13.895655 80620 start.go:364] acquiring machines lock for multinode-773885-m02: {Name:mk190e887b13a8e75fbaa786555e3f621b6db823 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0223 22:22:13.895705 80620 start.go:368] acquired machines lock for "multinode-773885-m02" in 30.081µs
I0223 22:22:13.895724 80620 start.go:96] Skipping create...Using existing machine configuration
I0223 22:22:13.895732 80620 fix.go:55] fixHost starting: m02
I0223 22:22:13.896010 80620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 22:22:13.896038 80620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 22:22:13.910341 80620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40933
I0223 22:22:13.910796 80620 main.go:141] libmachine: () Calling .GetVersion
I0223 22:22:13.911318 80620 main.go:141] libmachine: Using API Version 1
I0223 22:22:13.911343 80620 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 22:22:13.911672 80620 main.go:141] libmachine: () Calling .GetMachineName
I0223 22:22:13.911860 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:13.911979 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetState
I0223 22:22:13.913566 80620 fix.go:103] recreateIfNeeded on multinode-773885-m02: state=Stopped err=<nil>
I0223 22:22:13.913585 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
W0223 22:22:13.913746 80620 fix.go:129] unexpected machine state, will restart: <nil>
I0223 22:22:13.915708 80620 out.go:177] * Restarting existing kvm2 VM for "multinode-773885-m02" ...
I0223 22:22:13.917009 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .Start
I0223 22:22:13.917151 80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring networks are active...
I0223 22:22:13.917783 80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring network default is active
I0223 22:22:13.918134 80620 main.go:141] libmachine: (multinode-773885-m02) Ensuring network mk-multinode-773885 is active
I0223 22:22:13.918457 80620 main.go:141] libmachine: (multinode-773885-m02) Getting domain xml...
I0223 22:22:13.919047 80620 main.go:141] libmachine: (multinode-773885-m02) Creating domain...
I0223 22:22:15.148655 80620 main.go:141] libmachine: (multinode-773885-m02) Waiting to get IP...
I0223 22:22:15.149521 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:15.149889 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:15.149974 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.149904 80738 retry.go:31] will retry after 193.258579ms: waiting for machine to come up
I0223 22:22:15.344335 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:15.344701 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:15.344731 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.344650 80738 retry.go:31] will retry after 325.897575ms: waiting for machine to come up
I0223 22:22:15.672194 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:15.672594 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:15.672628 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:15.672550 80738 retry.go:31] will retry after 464.389068ms: waiting for machine to come up
I0223 22:22:16.138184 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:16.138690 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:16.138753 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:16.138682 80738 retry.go:31] will retry after 418.748231ms: waiting for machine to come up
I0223 22:22:16.559096 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:16.559605 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:16.559635 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:16.559550 80738 retry.go:31] will retry after 471.42311ms: waiting for machine to come up
I0223 22:22:17.033003 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:17.033388 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:17.033425 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:17.033349 80738 retry.go:31] will retry after 716.223287ms: waiting for machine to come up
I0223 22:22:17.751192 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:17.751627 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:17.751662 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:17.751564 80738 retry.go:31] will retry after 829.526019ms: waiting for machine to come up
I0223 22:22:18.582469 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:18.582861 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:18.582893 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:18.582810 80738 retry.go:31] will retry after 1.314736274s: waiting for machine to come up
I0223 22:22:19.898527 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:19.898968 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:19.898996 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:19.898923 80738 retry.go:31] will retry after 1.848898641s: waiting for machine to come up
I0223 22:22:21.749410 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:21.749799 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:21.749831 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:21.749746 80738 retry.go:31] will retry after 1.422968619s: waiting for machine to come up
I0223 22:22:23.174280 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:23.174762 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:23.174796 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:23.174689 80738 retry.go:31] will retry after 2.26457317s: waiting for machine to come up
I0223 22:22:25.440649 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:25.441040 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:25.441077 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:25.441025 80738 retry.go:31] will retry after 2.412299301s: waiting for machine to come up
I0223 22:22:27.856562 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:27.857000 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | unable to find current IP address of domain multinode-773885-m02 in network mk-multinode-773885
I0223 22:22:27.857029 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | I0223 22:22:27.856943 80738 retry.go:31] will retry after 3.510265055s: waiting for machine to come up
I0223 22:22:31.369182 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.369590 80620 main.go:141] libmachine: (multinode-773885-m02) Found IP for machine: 192.168.39.102
I0223 22:22:31.369622 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has current primary IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.369632 80620 main.go:141] libmachine: (multinode-773885-m02) Reserving static IP address...
I0223 22:22:31.370012 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "multinode-773885-m02", mac: "52:54:00:b1:bb:00", ip: "192.168.39.102"} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.370035 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | skip adding static IP to network mk-multinode-773885 - found existing host DHCP lease matching {name: "multinode-773885-m02", mac: "52:54:00:b1:bb:00", ip: "192.168.39.102"}
I0223 22:22:31.370045 80620 main.go:141] libmachine: (multinode-773885-m02) Reserved static IP address: 192.168.39.102
I0223 22:22:31.370056 80620 main.go:141] libmachine: (multinode-773885-m02) Waiting for SSH to be available...
I0223 22:22:31.370068 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Getting to WaitForSSH function...
I0223 22:22:31.372076 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.372417 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.372440 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.372551 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Using SSH client type: external
I0223 22:22:31.372572 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa (-rw-------)
I0223 22:22:31.372608 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0223 22:22:31.372622 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | About to run SSH command:
I0223 22:22:31.372638 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | exit 0
I0223 22:22:31.506747 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | SSH cmd err, output: <nil>:
I0223 22:22:31.507041 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetConfigRaw
I0223 22:22:31.507719 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
I0223 22:22:31.510014 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.510356 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.510390 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.510652 80620 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-59858/.minikube/profiles/multinode-773885/config.json ...
I0223 22:22:31.510883 80620 machine.go:88] provisioning docker machine ...
I0223 22:22:31.510909 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:31.511142 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
I0223 22:22:31.511321 80620 buildroot.go:166] provisioning hostname "multinode-773885-m02"
I0223 22:22:31.511339 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
I0223 22:22:31.511489 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:31.513584 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.513939 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.513969 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.514122 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:31.514268 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:31.514404 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:31.514532 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:31.514655 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:22:31.515234 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.102 22 <nil> <nil>}
I0223 22:22:31.515255 80620 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-773885-m02 && echo "multinode-773885-m02" | sudo tee /etc/hostname
I0223 22:22:31.655693 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773885-m02
I0223 22:22:31.655725 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:31.658407 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.658788 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.658815 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.658999 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:31.659184 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:31.659347 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:31.659464 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:31.659613 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:22:31.660176 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.102 22 <nil> <nil>}
I0223 22:22:31.660212 80620 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-773885-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773885-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-773885-m02' | sudo tee -a /etc/hosts;
fi
fi
I0223 22:22:31.799792 80620 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0223 22:22:31.799859 80620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-59858/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-59858/.minikube}
I0223 22:22:31.799879 80620 buildroot.go:174] setting up certificates
I0223 22:22:31.799889 80620 provision.go:83] configureAuth start
I0223 22:22:31.799902 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetMachineName
I0223 22:22:31.800252 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
I0223 22:22:31.803534 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.803989 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.804018 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.804274 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:31.806753 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.807088 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:31.807121 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:31.807237 80620 provision.go:138] copyHostCerts
I0223 22:22:31.807268 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
I0223 22:22:31.807311 80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem, removing ...
I0223 22:22:31.807324 80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem
I0223 22:22:31.807414 80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/key.pem (1671 bytes)
I0223 22:22:31.807572 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
I0223 22:22:31.807597 80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem, removing ...
I0223 22:22:31.807602 80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem
I0223 22:22:31.807632 80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/ca.pem (1078 bytes)
I0223 22:22:31.807685 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
I0223 22:22:31.807702 80620 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem, removing ...
I0223 22:22:31.807707 80620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem
I0223 22:22:31.807729 80620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-59858/.minikube/cert.pem (1123 bytes)
I0223 22:22:31.807773 80620 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca-key.pem org=jenkins.multinode-773885-m02 san=[192.168.39.102 192.168.39.102 localhost 127.0.0.1 minikube multinode-773885-m02]
I0223 22:22:32.063720 80620 provision.go:172] copyRemoteCerts
I0223 22:22:32.063776 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0223 22:22:32.063800 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:32.066310 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.066712 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:32.066742 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.066876 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:32.067090 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.067230 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:32.067359 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
I0223 22:22:32.161807 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0223 22:22:32.161874 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0223 22:22:32.184819 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem -> /etc/docker/server.pem
I0223 22:22:32.184883 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0223 22:22:32.206537 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0223 22:22:32.206625 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0223 22:22:32.228031 80620 provision.go:86] duration metric: configureAuth took 428.129514ms
I0223 22:22:32.228052 80620 buildroot.go:189] setting minikube options for container-runtime
I0223 22:22:32.228295 80620 config.go:182] Loaded profile config "multinode-773885": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 22:22:32.228322 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:32.228634 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:32.231144 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.231489 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:32.231520 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.231601 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:32.231819 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.231999 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.232117 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:32.232312 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:22:32.232708 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.102 22 <nil> <nil>}
I0223 22:22:32.232719 80620 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0223 22:22:32.365102 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0223 22:22:32.365122 80620 buildroot.go:70] root file system type: tmpfs
I0223 22:22:32.365241 80620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0223 22:22:32.365265 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:32.367818 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.368241 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:32.368263 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.368492 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:32.368703 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.368872 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.368982 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:32.369180 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:22:32.369581 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.102 22 <nil> <nil>}
I0223 22:22:32.369639 80620 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.240"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0223 22:22:32.513495 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.240
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0223 22:22:32.513523 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:32.515906 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.516266 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:32.516300 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:32.516468 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:32.516680 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.516873 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:32.517028 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:32.517178 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:22:32.517625 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.102 22 <nil> <nil>}
I0223 22:22:32.517648 80620 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0223 22:22:33.354684 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0223 22:22:33.354711 80620 machine.go:91] provisioned docker machine in 1.843811829s
I0223 22:22:33.354721 80620 start.go:300] post-start starting for "multinode-773885-m02" (driver="kvm2")
I0223 22:22:33.354729 80620 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0223 22:22:33.354752 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:33.355077 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0223 22:22:33.355108 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:33.357808 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.358150 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:33.358170 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.358307 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:33.358509 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:33.358697 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:33.358856 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
I0223 22:22:33.452337 80620 ssh_runner.go:195] Run: cat /etc/os-release
I0223 22:22:33.456207 80620 command_runner.go:130] > NAME=Buildroot
I0223 22:22:33.456227 80620 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
I0223 22:22:33.456233 80620 command_runner.go:130] > ID=buildroot
I0223 22:22:33.456241 80620 command_runner.go:130] > VERSION_ID=2021.02.12
I0223 22:22:33.456248 80620 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0223 22:22:33.456287 80620 info.go:137] Remote host: Buildroot 2021.02.12
I0223 22:22:33.456303 80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/addons for local assets ...
I0223 22:22:33.456371 80620 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-59858/.minikube/files for local assets ...
I0223 22:22:33.456462 80620 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> 669272.pem in /etc/ssl/certs
I0223 22:22:33.456474 80620 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem -> /etc/ssl/certs/669272.pem
I0223 22:22:33.456577 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0223 22:22:33.464384 80620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-59858/.minikube/files/etc/ssl/certs/669272.pem --> /etc/ssl/certs/669272.pem (1708 bytes)
I0223 22:22:33.486196 80620 start.go:303] post-start completed in 131.456152ms
I0223 22:22:33.486221 80620 fix.go:57] fixHost completed within 19.590489491s
I0223 22:22:33.486246 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:33.488925 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.489233 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:33.489259 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.489444 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:33.489642 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:33.489819 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:33.489958 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:33.490087 80620 main.go:141] libmachine: Using SSH client type: native
I0223 22:22:33.490502 80620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.39.102 22 <nil> <nil>}
I0223 22:22:33.490517 80620 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0223 22:22:33.619595 80620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677190953.568894594
I0223 22:22:33.619615 80620 fix.go:207] guest clock: 1677190953.568894594
I0223 22:22:33.619622 80620 fix.go:220] Guest: 2023-02-23 22:22:33.568894594 +0000 UTC Remote: 2023-02-23 22:22:33.48622588 +0000 UTC m=+80.262153220 (delta=82.668714ms)
I0223 22:22:33.619636 80620 fix.go:191] guest clock delta is within tolerance: 82.668714ms
I0223 22:22:33.619643 80620 start.go:83] releasing machines lock for "multinode-773885-m02", held for 19.723927358s
I0223 22:22:33.619668 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:33.619923 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetIP
I0223 22:22:33.622598 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.623025 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:33.623058 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.625082 80620 out.go:177] * Found network options:
I0223 22:22:33.626668 80620 out.go:177] - NO_PROXY=192.168.39.240
W0223 22:22:33.628011 80620 proxy.go:119] fail to check proxy env: Error ip not in block
I0223 22:22:33.628044 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:33.628608 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:33.628794 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .DriverName
I0223 22:22:33.628886 80620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0223 22:22:33.628929 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
W0223 22:22:33.629039 80620 proxy.go:119] fail to check proxy env: Error ip not in block
I0223 22:22:33.629123 80620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0223 22:22:33.629150 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHHostname
I0223 22:22:33.631754 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.631877 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.632173 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:33.632199 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.632233 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:bb:00", ip: ""} in network mk-multinode-773885: {Iface:virbr1 ExpiryTime:2023-02-23 23:22:25 +0000 UTC Type:0 Mac:52:54:00:b1:bb:00 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-773885-m02 Clientid:01:52:54:00:b1:bb:00}
I0223 22:22:33.632253 80620 main.go:141] libmachine: (multinode-773885-m02) DBG | domain multinode-773885-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:b1:bb:00 in network mk-multinode-773885
I0223 22:22:33.632406 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:33.632530 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHPort
I0223 22:22:33.632612 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:33.632687 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHKeyPath
I0223 22:22:33.632797 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:33.632952 80620 main.go:141] libmachine: (multinode-773885-m02) Calling .GetSSHUsername
I0223 22:22:33.632945 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
I0223 22:22:33.633068 80620 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-59858/.minikube/machines/multinode-773885-m02/id_rsa Username:docker}
I0223 22:22:33.747533 80620 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0223 22:22:33.748590 80620 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0223 22:22:33.748617 80620 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0223 22:22:33.748665 80620 ssh_runner.go:195] Run: which cri-dockerd
I0223 22:22:33.752644 80620 command_runner.go:130] > /usr/bin/cri-dockerd
I0223 22:22:33.752772 80620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0223 22:22:33.762613 80620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0223 22:22:33.779129 80620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0223 22:22:33.794495 80620 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0223 22:22:33.794614 80620 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0223 22:22:33.794634 80620 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0223 22:22:33.794710 80620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 22:22:33.819645 80620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0223 22:22:33.819665 80620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0223 22:22:33.819671 80620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0223 22:22:33.819676 80620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0223 22:22:33.819680 80620 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0223 22:22:33.819684 80620 command_runner.go:130] > registry.k8s.io/pause:3.9
I0223 22:22:33.819688 80620 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
I0223 22:22:33.819694 80620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0223 22:22:33.819697 80620 command_runner.go:130] > registry.k8s.io/pause:3.6
I0223 22:22:33.819702 80620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0223 22:22:33.819707 80620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0223 22:22:33.821344 80620 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
kindest/kindnetd:v20221004-44d545d1
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0223 22:22:33.821366 80620 docker.go:560] Images already preloaded, skipping extraction
I0223 22:22:33.821378 80620 start.go:485] detecting cgroup driver to use...
I0223 22:22:33.821513 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 22:22:33.838092 80620 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0223 22:22:33.838113 80620 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
I0223 22:22:33.838173 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0223 22:22:33.849104 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0223 22:22:33.860042 80620 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0223 22:22:33.860082 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0223 22:22:33.871017 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 22:22:33.881892 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0223 22:22:33.892548 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 22:22:33.903374 80620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0223 22:22:33.914628 80620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0223 22:22:33.925877 80620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0223 22:22:33.935581 80620 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0223 22:22:33.935636 80620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0223 22:22:33.945618 80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 22:22:34.050114 80620 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0223 22:22:34.068154 80620 start.go:485] detecting cgroup driver to use...
I0223 22:22:34.068229 80620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0223 22:22:34.089986 80620 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0223 22:22:34.090009 80620 command_runner.go:130] > [Unit]
I0223 22:22:34.090019 80620 command_runner.go:130] > Description=Docker Application Container Engine
I0223 22:22:34.090033 80620 command_runner.go:130] > Documentation=https://docs.docker.com
I0223 22:22:34.090041 80620 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0223 22:22:34.090049 80620 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0223 22:22:34.090056 80620 command_runner.go:130] > StartLimitBurst=3
I0223 22:22:34.090063 80620 command_runner.go:130] > StartLimitIntervalSec=60
I0223 22:22:34.090072 80620 command_runner.go:130] > [Service]
I0223 22:22:34.090083 80620 command_runner.go:130] > Type=notify
I0223 22:22:34.090089 80620 command_runner.go:130] > Restart=on-failure
I0223 22:22:34.090104 80620 command_runner.go:130] > Environment=NO_PROXY=192.168.39.240
I0223 22:22:34.090111 80620 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0223 22:22:34.090118 80620 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0223 22:22:34.090150 80620 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0223 22:22:34.090164 80620 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0223 22:22:34.090170 80620 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0223 22:22:34.090176 80620 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0223 22:22:34.090182 80620 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0223 22:22:34.090190 80620 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0223 22:22:34.090196 80620 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0223 22:22:34.090200 80620 command_runner.go:130] > ExecStart=
I0223 22:22:34.090213 80620 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0223 22:22:34.090219 80620 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0223 22:22:34.090224 80620 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0223 22:22:34.090233 80620 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0223 22:22:34.090237 80620 command_runner.go:130] > LimitNOFILE=infinity
I0223 22:22:34.090241 80620 command_runner.go:130] > LimitNPROC=infinity
I0223 22:22:34.090245 80620 command_runner.go:130] > LimitCORE=infinity
I0223 22:22:34.090251 80620 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0223 22:22:34.090256 80620 command_runner.go:130] > # Only systemd 226 and above support this version.
I0223 22:22:34.090260 80620 command_runner.go:130] > TasksMax=infinity
I0223 22:22:34.090265 80620 command_runner.go:130] > TimeoutStartSec=0
I0223 22:22:34.090273 80620 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0223 22:22:34.090279 80620 command_runner.go:130] > Delegate=yes
I0223 22:22:34.090285 80620 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0223 22:22:34.090293 80620 command_runner.go:130] > KillMode=process
I0223 22:22:34.090297 80620 command_runner.go:130] > [Install]
I0223 22:22:34.090302 80620 command_runner.go:130] > WantedBy=multi-user.target
I0223 22:22:34.090359 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0223 22:22:34.105030 80620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0223 22:22:34.126591 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0223 22:22:34.140060 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 22:22:34.153929 80620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0223 22:22:34.184699 80620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 22:22:34.197888 80620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 22:22:34.214560 80620 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0223 22:22:34.214588 80620 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0223 22:22:34.214922 80620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0223 22:22:34.314415 80620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0223 22:22:34.423777 80620 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0223 22:22:34.423812 80620 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0223 22:22:34.439350 80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 22:22:34.539377 80620 ssh_runner.go:195] Run: sudo systemctl restart docker
I0223 22:22:35.976151 80620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.436733266s)
I0223 22:22:35.976218 80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0223 22:22:36.088366 80620 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0223 22:22:36.208338 80620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0223 22:22:36.318554 80620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 22:22:36.423882 80620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0223 22:22:36.438700 80620 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
I0223 22:22:36.441277 80620 out.go:177]
W0223 22:22:36.442813 80620 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W0223 22:22:36.442833 80620 out.go:239] *
W0223 22:22:36.443730 80620 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0223 22:22:36.445382 80620 out.go:177]
*
* ==> Docker <==
* -- Journal begins at Thu 2023-02-23 22:21:24 UTC, ends at Thu 2023-02-23 22:22:37 UTC. --
Feb 23 22:21:58 multinode-773885 dockerd[833]: time="2023-02-23T22:21:58.653197396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 22:21:58 multinode-773885 dockerd[833]: time="2023-02-23T22:21:58.653344660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 22:21:58 multinode-773885 dockerd[833]: time="2023-02-23T22:21:58.653370552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 22:21:58 multinode-773885 dockerd[833]: time="2023-02-23T22:21:58.653655096Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6c05479ab6bded8fa4b510984ebdaff14f9e940ce5f996cbbfa74f89cdf0e4df pid=2349 runtime=io.containerd.runc.v2
Feb 23 22:22:09 multinode-773885 dockerd[833]: time="2023-02-23T22:22:09.976478317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 22:22:09 multinode-773885 dockerd[833]: time="2023-02-23T22:22:09.976529296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 22:22:09 multinode-773885 dockerd[833]: time="2023-02-23T22:22:09.976538800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 22:22:09 multinode-773885 dockerd[833]: time="2023-02-23T22:22:09.977357166Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/08db8c8fe66700151ca6e921ec0c7827f3f8b9da2185e6f9b77717b3db2213a2 pid=2641 runtime=io.containerd.runc.v2
Feb 23 22:22:10 multinode-773885 dockerd[833]: time="2023-02-23T22:22:10.562985619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 22:22:10 multinode-773885 dockerd[833]: time="2023-02-23T22:22:10.563244746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 22:22:10 multinode-773885 dockerd[833]: time="2023-02-23T22:22:10.563254901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 22:22:10 multinode-773885 dockerd[833]: time="2023-02-23T22:22:10.563554212Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/17bc89f184c67734f2c7bf76e9475c45856ec85a6cc69703a04036b48218a306 pid=2718 runtime=io.containerd.runc.v2
Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.277252833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.277345995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.277367820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.277588969Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9f2502586a39c34ac304fe5d1a3c0d2111c439b907e9f9955feec5ca5504872d pid=2837 runtime=io.containerd.runc.v2
Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.887734997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.887789077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.887798415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 22:22:11 multinode-773885 dockerd[833]: time="2023-02-23T22:22:11.887932649Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ec64ae912e0437233e2ff6d3d8ed0b5e64201755fd0b86f988efacd563ac301c pid=2935 runtime=io.containerd.runc.v2
Feb 23 22:22:26 multinode-773885 dockerd[827]: time="2023-02-23T22:22:26.143265689Z" level=info msg="ignoring event" container=27a3e00db0cef9776f9e3172722f98b3c96dbadc1022f977185f1e29d7dbd36a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 23 22:22:26 multinode-773885 dockerd[833]: time="2023-02-23T22:22:26.144112416Z" level=info msg="shim disconnected" id=27a3e00db0cef9776f9e3172722f98b3c96dbadc1022f977185f1e29d7dbd36a
Feb 23 22:22:26 multinode-773885 dockerd[833]: time="2023-02-23T22:22:26.144166893Z" level=warning msg="cleaning up after shim disconnected" id=27a3e00db0cef9776f9e3172722f98b3c96dbadc1022f977185f1e29d7dbd36a namespace=moby
Feb 23 22:22:26 multinode-773885 dockerd[833]: time="2023-02-23T22:22:26.144202001Z" level=info msg="cleaning up dead shim"
Feb 23 22:22:26 multinode-773885 dockerd[833]: time="2023-02-23T22:22:26.167427651Z" level=warning msg="cleanup warnings time=\"2023-02-23T22:22:26Z\" level=info msg=\"starting signal loop\" namespace=moby pid=3166 runtime=io.containerd.runc.v2\n"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
ec64ae912e043 8c811b4aec35f 26 seconds ago Running busybox 1 9f2502586a39c
17bc89f184c67 5185b96f0becf 27 seconds ago Running coredns 1 08db8c8fe6670
6c05479ab6bde d6e3e26021b60 39 seconds ago Running kindnet-cni 1 e749663c5c7e7
27a3e00db0cef 6e38f40d628db 42 seconds ago Exited storage-provisioner 1 bc303f21527d1
9454f57758e35 46a6bb3c77ce0 42 seconds ago Running kube-proxy 1 7cce6a3412d50
1e657e364abdc fce326961ae2d 48 seconds ago Running etcd 1 9832634b69a74
efd94ac044a0a 655493523f607 48 seconds ago Running kube-scheduler 1 6464d18d96882
6c70297f99403 e9c08e11b07f6 48 seconds ago Running kube-controller-manager 1 bff62e4487a30
1f74fa3dd2e7b deb04688c4a35 48 seconds ago Running kube-apiserver 1 4d2cd9fe6c8db
80d446e21be45 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 3 minutes ago Exited busybox 0 ebbb7d19d9aa3
a31cf43457e01 5185b96f0becf 4 minutes ago Exited coredns 0 75e472928e30d
f6b2b873cba93 kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe 4 minutes ago Exited kindnet-cni 0 f284ce294fa00
6becaf5c86404 46a6bb3c77ce0 4 minutes ago Exited kube-proxy 0 a2a9a29b5a412
8d29ee663e61d fce326961ae2d 5 minutes ago Exited etcd 0 3b6e6d975efae
baad115b76c60 655493523f607 5 minutes ago Exited kube-scheduler 0 072b5f08a10f2
53723346fe3cc e9c08e11b07f6 5 minutes ago Exited kube-controller-manager 0 979e703c6176a
6a41aad932999 deb04688c4a35 5 minutes ago Exited kube-apiserver 0 745d6ec7adf4b
*
* ==> coredns [17bc89f184c6] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:60321 - 9770 "HINFO IN 6662394053686617131.163874164669885542. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.069250639s
*
* ==> coredns [a31cf43457e0] <==
* [INFO] 10.244.1.2:47000 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001758837s
[INFO] 10.244.1.2:44690 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131616s
[INFO] 10.244.1.2:37067 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011391s
[INFO] 10.244.1.2:38424 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001108385s
[INFO] 10.244.1.2:47838 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000089356s
[INFO] 10.244.1.2:41552 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106594s
[INFO] 10.244.1.2:51630 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135553s
[INFO] 10.244.0.3:55853 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122391s
[INFO] 10.244.0.3:35953 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008752s
[INFO] 10.244.0.3:56239 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083093s
[INFO] 10.244.0.3:38385 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083481s
[INFO] 10.244.1.2:53920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283555s
[INFO] 10.244.1.2:34363 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000773507s
[INFO] 10.244.1.2:54662 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081096s
[INFO] 10.244.1.2:48627 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000266217s
[INFO] 10.244.0.3:54203 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197101s
[INFO] 10.244.0.3:52399 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162405s
[INFO] 10.244.0.3:45614 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000234431s
[INFO] 10.244.0.3:47751 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000134862s
[INFO] 10.244.1.2:53869 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201736s
[INFO] 10.244.1.2:43680 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175885s
[INFO] 10.244.1.2:45494 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167403s
[INFO] 10.244.1.2:52027 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00017095s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> describe nodes <==
* Name: multinode-773885
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-773885
kubernetes.io/os=linux
minikube.k8s.io/commit=75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0
minikube.k8s.io/name=multinode-773885
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_02_23T22_17_39_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 23 Feb 2023 22:17:34 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-773885
AcquireTime: <unset>
RenewTime: Thu, 23 Feb 2023 22:22:34 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 23 Feb 2023 22:22:04 +0000 Thu, 23 Feb 2023 22:17:31 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 23 Feb 2023 22:22:04 +0000 Thu, 23 Feb 2023 22:17:31 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 23 Feb 2023 22:22:04 +0000 Thu, 23 Feb 2023 22:17:31 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 23 Feb 2023 22:22:04 +0000 Thu, 23 Feb 2023 22:22:04 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.240
Hostname: multinode-773885
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 1475187eff99446eb4f7e011051cc8fa
System UUID: 1475187e-ff99-446e-b4f7-e011051cc8fa
Boot ID: 4d4d0a54-af2e-49a7-a9dd-250c866abcb4
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-6b86dd6d48-9b7sp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m35s
kube-system coredns-787d4945fb-ktr7h 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 4m47s
kube-system etcd-multinode-773885 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 4m59s
kube-system kindnet-p64zr 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 4m47s
kube-system kube-apiserver-multinode-773885 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m1s
kube-system kube-controller-manager-multinode-773885 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m59s
kube-system kube-proxy-mdjks 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m47s
kube-system kube-scheduler-multinode-773885 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m59s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m46s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (10%) 220Mi (10%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m45s kube-proxy
Normal Starting 41s kube-proxy
Normal NodeHasSufficientMemory 5m12s (x5 over 5m12s) kubelet Node multinode-773885 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m12s (x5 over 5m12s) kubelet Node multinode-773885 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m12s (x5 over 5m12s) kubelet Node multinode-773885 status is now: NodeHasSufficientPID
Normal NodeHasSufficientPID 4m59s kubelet Node multinode-773885 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 4m59s kubelet Node multinode-773885 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m59s kubelet Node multinode-773885 status is now: NodeHasNoDiskPressure
Normal NodeAllocatableEnforced 4m59s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m59s kubelet Starting kubelet.
Normal RegisteredNode 4m48s node-controller Node multinode-773885 event: Registered Node multinode-773885 in Controller
Normal NodeReady 4m36s kubelet Node multinode-773885 status is now: NodeReady
Normal Starting 50s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 49s (x8 over 49s) kubelet Node multinode-773885 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 49s (x8 over 49s) kubelet Node multinode-773885 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 49s (x7 over 49s) kubelet Node multinode-773885 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 49s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 31s node-controller Node multinode-773885 event: Registered Node multinode-773885 in Controller
Name: multinode-773885-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-773885-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 23 Feb 2023 22:18:46 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-773885-m02
AcquireTime: <unset>
RenewTime: Thu, 23 Feb 2023 22:20:38 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 23 Feb 2023 22:19:17 +0000 Thu, 23 Feb 2023 22:18:46 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 23 Feb 2023 22:19:17 +0000 Thu, 23 Feb 2023 22:18:46 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 23 Feb 2023 22:19:17 +0000 Thu, 23 Feb 2023 22:18:46 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 23 Feb 2023 22:19:17 +0000 Thu, 23 Feb 2023 22:18:59 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.102
Hostname: multinode-773885-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: fb9064ecea5b4e79869f499ba8bce75c
System UUID: fb9064ec-ea5b-4e79-869f-499ba8bce75c
Boot ID: 4be4ac98-4af3-4b16-af45-9c05c30bb17d
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-6b86dd6d48-zscjg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m35s
kube-system kindnet-fg44s 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 3m51s
kube-system kube-proxy-5d5vn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m51s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 3m48s kube-proxy
Normal Starting 3m51s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m51s (x2 over 3m51s) kubelet Node multinode-773885-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m51s (x2 over 3m51s) kubelet Node multinode-773885-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m51s (x2 over 3m51s) kubelet Node multinode-773885-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m51s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 3m48s node-controller Node multinode-773885-m02 event: Registered Node multinode-773885-m02 in Controller
Normal NodeReady 3m38s kubelet Node multinode-773885-m02 status is now: NodeReady
Normal RegisteredNode 31s node-controller Node multinode-773885-m02 event: Registered Node multinode-773885-m02 in Controller
Name: multinode-773885-m03
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-773885-m03
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 23 Feb 2023 22:20:34 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-773885-m03
AcquireTime: <unset>
RenewTime: Thu, 23 Feb 2023 22:20:43 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 23 Feb 2023 22:20:42 +0000 Thu, 23 Feb 2023 22:20:33 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 23 Feb 2023 22:20:42 +0000 Thu, 23 Feb 2023 22:20:33 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 23 Feb 2023 22:20:42 +0000 Thu, 23 Feb 2023 22:20:33 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 23 Feb 2023 22:20:42 +0000 Thu, 23 Feb 2023 22:20:42 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.58
Hostname: multinode-773885-m03
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: c591c169912649639566ebe598459857
System UUID: c591c169-9126-4963-9566-ebe598459857
Boot ID: 100c5981-611e-4766-903a-70dbe2627dfb
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.3.0/24
PodCIDRs: 10.244.3.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kindnet-fbfsf 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 2m51s
kube-system kube-proxy-psgdt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m51s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m48s kube-proxy
Normal Starting 2m1s kube-proxy
Normal NodeHasNoDiskPressure 2m51s (x2 over 2m51s) kubelet Node multinode-773885-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m51s (x2 over 2m51s) kubelet Node multinode-773885-m03 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m51s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 2m51s (x2 over 2m51s) kubelet Node multinode-773885-m03 status is now: NodeHasSufficientMemory
Normal Starting 2m51s kubelet Starting kubelet.
Normal NodeReady 2m38s kubelet Node multinode-773885-m03 status is now: NodeReady
Normal Starting 2m4s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m4s (x2 over 2m4s) kubelet Node multinode-773885-m03 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m4s (x2 over 2m4s) kubelet Node multinode-773885-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m4s (x2 over 2m4s) kubelet Node multinode-773885-m03 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m4s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 115s kubelet Node multinode-773885-m03 status is now: NodeReady
Normal RegisteredNode 31s node-controller Node multinode-773885-m03 event: Registered Node multinode-773885-m03 in Controller
*
* ==> dmesg <==
* [Feb23 22:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.071531] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.955731] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.280486] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.148289] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.553293] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +5.692232] systemd-fstab-generator[510]: Ignoring "noauto" for root device
[ +0.095720] systemd-fstab-generator[527]: Ignoring "noauto" for root device
[ +1.185288] systemd-fstab-generator[758]: Ignoring "noauto" for root device
[ +0.248453] systemd-fstab-generator[792]: Ignoring "noauto" for root device
[ +0.102398] systemd-fstab-generator[803]: Ignoring "noauto" for root device
[ +0.122364] systemd-fstab-generator[816]: Ignoring "noauto" for root device
[ +1.531595] systemd-fstab-generator[987]: Ignoring "noauto" for root device
[ +0.111043] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
[ +0.104179] systemd-fstab-generator[1034]: Ignoring "noauto" for root device
[ +0.097652] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
[ +11.667470] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
[ +0.392417] kauditd_printk_skb: 67 callbacks suppressed
[ +8.206240] kauditd_printk_skb: 8 callbacks suppressed
[Feb23 22:22] kauditd_printk_skb: 16 callbacks suppressed
*
* ==> etcd [1e657e364abd] <==
* {"level":"info","ts":"2023-02-23T22:21:50.930Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2023-02-23T22:21:50.930Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2023-02-23T22:21:50.931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 switched to configuration voters=(2080375272429567737)"}
{"level":"info","ts":"2023-02-23T22:21:50.932Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e0745912b0778b6e","local-member-id":"1cdefa49b8abbef9","added-peer-id":"1cdefa49b8abbef9","added-peer-peer-urls":["https://192.168.39.240:2380"]}
{"level":"info","ts":"2023-02-23T22:21:50.933Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e0745912b0778b6e","local-member-id":"1cdefa49b8abbef9","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-23T22:21:50.934Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-23T22:21:50.954Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-02-23T22:21:50.955Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"1cdefa49b8abbef9","initial-advertise-peer-urls":["https://192.168.39.240:2380"],"listen-peer-urls":["https://192.168.39.240:2380"],"advertise-client-urls":["https://192.168.39.240:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.240:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-02-23T22:21:50.955Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.240:2380"}
{"level":"info","ts":"2023-02-23T22:21:50.958Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.240:2380"}
{"level":"info","ts":"2023-02-23T22:21:50.955Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 is starting a new election at term 2"}
{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became pre-candidate at term 2"}
{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 received MsgPreVoteResp from 1cdefa49b8abbef9 at term 2"}
{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became candidate at term 3"}
{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 received MsgVoteResp from 1cdefa49b8abbef9 at term 3"}
{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became leader at term 3"}
{"level":"info","ts":"2023-02-23T22:21:52.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1cdefa49b8abbef9 elected leader 1cdefa49b8abbef9 at term 3"}
{"level":"info","ts":"2023-02-23T22:21:52.080Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"1cdefa49b8abbef9","local-member-attributes":"{Name:multinode-773885 ClientURLs:[https://192.168.39.240:2379]}","request-path":"/0/members/1cdefa49b8abbef9/attributes","cluster-id":"e0745912b0778b6e","publish-timeout":"7s"}
{"level":"info","ts":"2023-02-23T22:21:52.080Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T22:21:52.081Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-02-23T22:21:52.081Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-02-23T22:21:52.081Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T22:21:52.083Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.240:2379"}
{"level":"info","ts":"2023-02-23T22:21:52.084Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
*
* ==> etcd [8d29ee663e61] <==
* {"level":"info","ts":"2023-02-23T22:17:32.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became candidate at term 2"}
{"level":"info","ts":"2023-02-23T22:17:32.479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 received MsgVoteResp from 1cdefa49b8abbef9 at term 2"}
{"level":"info","ts":"2023-02-23T22:17:32.479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 became leader at term 2"}
{"level":"info","ts":"2023-02-23T22:17:32.479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1cdefa49b8abbef9 elected leader 1cdefa49b8abbef9 at term 2"}
{"level":"info","ts":"2023-02-23T22:17:32.484Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-23T22:17:32.487Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"1cdefa49b8abbef9","local-member-attributes":"{Name:multinode-773885 ClientURLs:[https://192.168.39.240:2379]}","request-path":"/0/members/1cdefa49b8abbef9/attributes","cluster-id":"e0745912b0778b6e","publish-timeout":"7s"}
{"level":"info","ts":"2023-02-23T22:17:32.488Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T22:17:32.492Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.240:2379"}
{"level":"info","ts":"2023-02-23T22:17:32.489Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T22:17:32.496Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-02-23T22:17:32.489Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e0745912b0778b6e","local-member-id":"1cdefa49b8abbef9","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-23T22:17:32.503Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-23T22:17:32.504Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-02-23T22:17:32.504Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-02-23T22:17:32.507Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"warn","ts":"2023-02-23T22:18:39.794Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"154.910442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2023-02-23T22:18:39.794Z","caller":"traceutil/trace.go:171","msg":"trace[1229332276] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:443; }","duration":"155.153979ms","start":"2023-02-23T22:18:39.639Z","end":"2023-02-23T22:18:39.794Z","steps":["trace[1229332276] 'range keys from in-memory index tree' (duration: 154.79846ms)"],"step_count":1}
{"level":"info","ts":"2023-02-23T22:19:39.387Z","caller":"traceutil/trace.go:171","msg":"trace[841849164] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"239.425375ms","start":"2023-02-23T22:19:39.147Z","end":"2023-02-23T22:19:39.387Z","steps":["trace[841849164] 'process raft request' (duration: 239.262494ms)"],"step_count":1}
{"level":"info","ts":"2023-02-23T22:19:41.080Z","caller":"traceutil/trace.go:171","msg":"trace[146502320] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"106.873274ms","start":"2023-02-23T22:19:40.973Z","end":"2023-02-23T22:19:41.080Z","steps":["trace[146502320] 'process raft request' (duration: 106.732936ms)"],"step_count":1}
{"level":"info","ts":"2023-02-23T22:20:45.246Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-02-23T22:20:45.246Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"multinode-773885","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.240:2380"],"advertise-client-urls":["https://192.168.39.240:2379"]}
{"level":"info","ts":"2023-02-23T22:20:45.273Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1cdefa49b8abbef9","current-leader-member-id":"1cdefa49b8abbef9"}
{"level":"info","ts":"2023-02-23T22:20:45.277Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.39.240:2380"}
{"level":"info","ts":"2023-02-23T22:20:45.285Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.39.240:2380"}
{"level":"info","ts":"2023-02-23T22:20:45.285Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"multinode-773885","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.240:2380"],"advertise-client-urls":["https://192.168.39.240:2379"]}
*
* ==> kernel <==
* 22:22:37 up 1 min, 0 users, load average: 0.60, 0.19, 0.07
Linux multinode-773885 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kindnet [6c05479ab6bd] <==
* I0223 22:21:59.628331 1 main.go:227] handling current node
I0223 22:21:59.629191 1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
I0223 22:21:59.629202 1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24]
I0223 22:21:59.629410 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.102 Flags: [] Table: 0}
I0223 22:21:59.629537 1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
I0223 22:21:59.629545 1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24]
I0223 22:21:59.629690 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.58 Flags: [] Table: 0}
I0223 22:22:09.634203 1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
I0223 22:22:09.634224 1 main.go:227] handling current node
I0223 22:22:09.634233 1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
I0223 22:22:09.634237 1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24]
I0223 22:22:09.634329 1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
I0223 22:22:09.634334 1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24]
I0223 22:22:19.648879 1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
I0223 22:22:19.649253 1 main.go:227] handling current node
I0223 22:22:19.649329 1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
I0223 22:22:19.649426 1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24]
I0223 22:22:19.649553 1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
I0223 22:22:19.649592 1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24]
I0223 22:22:29.663056 1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
I0223 22:22:29.663342 1 main.go:227] handling current node
I0223 22:22:29.663589 1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
I0223 22:22:29.663639 1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24]
I0223 22:22:29.663927 1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
I0223 22:22:29.663981 1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24]
*
* ==> kindnet [f6b2b873cba9] <==
* I0223 22:20:08.782335 1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
I0223 22:20:08.782366 1 main.go:227] handling current node
I0223 22:20:08.782378 1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
I0223 22:20:08.782383 1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24]
I0223 22:20:08.782498 1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
I0223 22:20:08.782503 1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.2.0/24]
I0223 22:20:18.789034 1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
I0223 22:20:18.789102 1 main.go:227] handling current node
I0223 22:20:18.789112 1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
I0223 22:20:18.789118 1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24]
I0223 22:20:18.789480 1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
I0223 22:20:18.789490 1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.2.0/24]
I0223 22:20:28.797182 1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
I0223 22:20:28.797218 1 main.go:227] handling current node
I0223 22:20:28.797230 1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
I0223 22:20:28.797238 1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24]
I0223 22:20:28.797428 1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
I0223 22:20:28.797438 1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.2.0/24]
I0223 22:20:38.808257 1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
I0223 22:20:38.808531 1 main.go:227] handling current node
I0223 22:20:38.808612 1 main.go:223] Handling node with IPs: map[192.168.39.102:{}]
I0223 22:20:38.808735 1 main.go:250] Node multinode-773885-m02 has CIDR [10.244.1.0/24]
I0223 22:20:38.808954 1 main.go:223] Handling node with IPs: map[192.168.39.58:{}]
I0223 22:20:38.809162 1 main.go:250] Node multinode-773885-m03 has CIDR [10.244.3.0/24]
I0223 22:20:38.809406 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.58 Flags: [] Table: 0}
*
* ==> kube-apiserver [1f74fa3dd2e7] <==
* I0223 22:21:53.767701 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0223 22:21:53.767780 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I0223 22:21:53.763570 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0223 22:21:53.767927 1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
I0223 22:21:53.807375 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0223 22:21:53.807485 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0223 22:21:53.845960 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0223 22:21:53.860908 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0223 22:21:53.860943 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0223 22:21:53.861339 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0223 22:21:53.865182 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0223 22:21:53.875653 1 cache.go:39] Caches are synced for autoregister controller
I0223 22:21:53.875809 1 shared_informer.go:280] Caches are synced for configmaps
I0223 22:21:53.875948 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0223 22:21:53.875961 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0223 22:21:53.941378 1 shared_informer.go:280] Caches are synced for node_authorizer
I0223 22:21:54.514978 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0223 22:21:54.778557 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0223 22:21:56.611533 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0223 22:21:56.743211 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0223 22:21:56.752344 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0223 22:21:56.816590 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0223 22:21:56.823384 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0223 22:22:06.886425 1 controller.go:615] quota admission added evaluator for: endpoints
I0223 22:22:06.981775 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-apiserver [6a41aad93299] <==
* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0223 22:20:55.126061 1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0223 22:20:55.154966 1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0223 22:20:55.192941 1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
*
* ==> kube-controller-manager [53723346fe3c] <==
* I0223 22:18:04.424086 1 node_lifecycle_controller.go:1231] Controller detected that some Nodes are Ready. Exiting master disruption mode.
W0223 22:18:46.708565 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-773885-m02" does not exist
I0223 22:18:46.720411 1 range_allocator.go:372] Set node multinode-773885-m02 PodCIDR to [10.244.1.0/24]
I0223 22:18:46.740966 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fg44s"
I0223 22:18:46.741018 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5d5vn"
W0223 22:18:49.432085 1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-773885-m02. Assuming now as a timestamp.
I0223 22:18:49.432675 1 event.go:294] "Event occurred" object="multinode-773885-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-773885-m02 event: Registered Node multinode-773885-m02 in Controller"
W0223 22:18:59.747513 1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
I0223 22:19:02.090093 1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
I0223 22:19:02.101165 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-zscjg"
I0223 22:19:02.114911 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-9b7sp"
I0223 22:19:04.450628 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48-zscjg" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-6b86dd6d48-zscjg"
W0223 22:19:46.421861 1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
W0223 22:19:46.423059 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-773885-m03" does not exist
I0223 22:19:46.438555 1 range_allocator.go:372] Set node multinode-773885-m03 PodCIDR to [10.244.2.0/24]
I0223 22:19:46.456557 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-psgdt"
I0223 22:19:46.456590 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fbfsf"
I0223 22:19:49.459354 1 event.go:294] "Event occurred" object="multinode-773885-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-773885-m03 event: Registered Node multinode-773885-m03 in Controller"
W0223 22:19:49.460425 1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-773885-m03. Assuming now as a timestamp.
W0223 22:19:59.274458 1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
W0223 22:20:33.012085 1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
W0223 22:20:34.095715 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-773885-m03" does not exist
W0223 22:20:34.096409 1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m02 node
I0223 22:20:34.104228 1 range_allocator.go:372] Set node multinode-773885-m03 PodCIDR to [10.244.3.0/24]
W0223 22:20:42.177970 1 topologycache.go:232] Can't get CPU or zone information for multinode-773885-m03 node
*
* ==> kube-controller-manager [6c70297f9940] <==
* I0223 22:22:06.873909 1 shared_informer.go:280] Caches are synced for ReplicationController
I0223 22:22:06.874261 1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
I0223 22:22:06.874514 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0223 22:22:06.874727 1 shared_informer.go:280] Caches are synced for persistent volume
I0223 22:22:06.874139 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-client
I0223 22:22:06.874151 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-serving
I0223 22:22:06.885778 1 shared_informer.go:280] Caches are synced for namespace
I0223 22:22:06.887045 1 shared_informer.go:280] Caches are synced for node
I0223 22:22:06.887199 1 range_allocator.go:167] Sending events to api server.
I0223 22:22:06.887268 1 range_allocator.go:171] Starting range CIDR allocator
I0223 22:22:06.887457 1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
I0223 22:22:06.887727 1 shared_informer.go:280] Caches are synced for cidrallocator
I0223 22:22:06.894791 1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
I0223 22:22:06.902215 1 shared_informer.go:280] Caches are synced for attach detach
I0223 22:22:06.907056 1 shared_informer.go:280] Caches are synced for endpoint_slice
I0223 22:22:06.947594 1 shared_informer.go:280] Caches are synced for ReplicaSet
I0223 22:22:06.985123 1 shared_informer.go:280] Caches are synced for resource quota
I0223 22:22:06.986536 1 shared_informer.go:280] Caches are synced for resource quota
I0223 22:22:07.004087 1 shared_informer.go:280] Caches are synced for crt configmap
I0223 22:22:07.022102 1 shared_informer.go:280] Caches are synced for deployment
I0223 22:22:07.024559 1 shared_informer.go:280] Caches are synced for disruption
I0223 22:22:07.043836 1 shared_informer.go:280] Caches are synced for bootstrap_signer
I0223 22:22:07.418122 1 shared_informer.go:280] Caches are synced for garbage collector
I0223 22:22:07.418162 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0223 22:22:07.423312 1 shared_informer.go:280] Caches are synced for garbage collector
*
* ==> kube-proxy [6becaf5c8640] <==
* I0223 22:17:52.428519 1 node.go:163] Successfully retrieved node IP: 192.168.39.240
I0223 22:17:52.428776 1 server_others.go:109] "Detected node IP" address="192.168.39.240"
I0223 22:17:52.429048 1 server_others.go:535] "Using iptables proxy"
I0223 22:17:52.471955 1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0223 22:17:52.472202 1 server_others.go:176] "Using iptables Proxier"
I0223 22:17:52.472334 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0223 22:17:52.472860 1 server.go:655] "Version info" version="v1.26.1"
I0223 22:17:52.473096 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0223 22:17:52.473898 1 config.go:317] "Starting service config controller"
I0223 22:17:52.474393 1 shared_informer.go:273] Waiting for caches to sync for service config
I0223 22:17:52.474564 1 config.go:226] "Starting endpoint slice config controller"
I0223 22:17:52.474637 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0223 22:17:52.476441 1 config.go:444] "Starting node config controller"
I0223 22:17:52.476591 1 shared_informer.go:273] Waiting for caches to sync for node config
I0223 22:17:52.575596 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0223 22:17:52.575638 1 shared_informer.go:280] Caches are synced for service config
I0223 22:17:52.577063 1 shared_informer.go:280] Caches are synced for node config
*
* ==> kube-proxy [9454f57758e3] <==
* I0223 22:21:55.723163 1 node.go:163] Successfully retrieved node IP: 192.168.39.240
I0223 22:21:55.729131 1 server_others.go:109] "Detected node IP" address="192.168.39.240"
I0223 22:21:55.733751 1 server_others.go:535] "Using iptables proxy"
I0223 22:21:56.081608 1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0223 22:21:56.081932 1 server_others.go:176] "Using iptables Proxier"
I0223 22:21:56.083401 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0223 22:21:56.084774 1 server.go:655] "Version info" version="v1.26.1"
I0223 22:21:56.203479 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0223 22:21:56.205085 1 config.go:317] "Starting service config controller"
I0223 22:21:56.205493 1 shared_informer.go:273] Waiting for caches to sync for service config
I0223 22:21:56.205674 1 config.go:226] "Starting endpoint slice config controller"
I0223 22:21:56.205782 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0223 22:21:56.206845 1 config.go:444] "Starting node config controller"
I0223 22:21:56.208637 1 shared_informer.go:273] Waiting for caches to sync for node config
I0223 22:21:56.348283 1 shared_informer.go:280] Caches are synced for node config
I0223 22:21:56.351314 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0223 22:21:56.363180 1 shared_informer.go:280] Caches are synced for service config
*
* ==> kube-scheduler [baad115b76c6] <==
* W0223 22:17:34.610009 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0223 22:17:34.610030 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0223 22:17:34.611025 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0223 22:17:34.611092 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0223 22:17:34.613999 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0223 22:17:34.614066 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0223 22:17:34.614149 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0223 22:17:34.614173 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0223 22:17:34.614213 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0223 22:17:34.614265 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0223 22:17:35.487184 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0223 22:17:35.487376 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0223 22:17:35.632170 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0223 22:17:35.632547 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0223 22:17:35.721529 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0223 22:17:35.721738 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0223 22:17:35.755180 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0223 22:17:35.755382 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0223 22:17:35.761259 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0223 22:17:35.761432 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0223 22:17:36.073523 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0223 22:17:36.074101 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0223 22:17:38.782901 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0223 22:20:45.176065 1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
E0223 22:20:45.176491 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kube-scheduler [efd94ac044a0] <==
* I0223 22:21:51.487920 1 serving.go:348] Generated self-signed cert in-memory
W0223 22:21:53.821119 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0223 22:21:53.821286 1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0223 22:21:53.821327 1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
W0223 22:21:53.821848 1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0223 22:21:53.856843 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
I0223 22:21:53.857373 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0223 22:21:53.859249 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0223 22:21:53.859546 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0223 22:21:53.860180 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0223 22:21:53.859587 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0223 22:21:53.960971 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Thu 2023-02-23 22:21:24 UTC, ends at Thu 2023-02-23 22:22:38 UTC. --
Feb 23 22:21:56 multinode-773885 kubelet[1292]: E0223 22:21:56.141211 1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-9b7sp" podUID=7e6550d2-21fc-446e-ba91-4991f379de1c
Feb 23 22:21:56 multinode-773885 kubelet[1292]: E0223 22:21:56.789777 1292 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Feb 23 22:21:56 multinode-773885 kubelet[1292]: E0223 22:21:56.789834 1292 projected.go:198] Error preparing data for projected volume kube-api-access-5k946 for pod default/busybox-6b86dd6d48-9b7sp: object "default"/"kube-root-ca.crt" not registered
Feb 23 22:21:56 multinode-773885 kubelet[1292]: E0223 22:21:56.789892 1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946 podName:7e6550d2-21fc-446e-ba91-4991f379de1c nodeName:}" failed. No retries permitted until 2023-02-23 22:21:58.789875256 +0000 UTC m=+11.061994009 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5k946" (UniqueName: "kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946") pod "busybox-6b86dd6d48-9b7sp" (UID: "7e6550d2-21fc-446e-ba91-4991f379de1c") : object "default"/"kube-root-ca.crt" not registered
Feb 23 22:21:57 multinode-773885 kubelet[1292]: E0223 22:21:57.695471 1292 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Feb 23 22:21:57 multinode-773885 kubelet[1292]: E0223 22:21:57.696044 1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5337fe89-b5a2-4562-84e3-3a7e1f201ff5-config-volume podName:5337fe89-b5a2-4562-84e3-3a7e1f201ff5 nodeName:}" failed. No retries permitted until 2023-02-23 22:22:01.695966879 +0000 UTC m=+13.968085633 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5337fe89-b5a2-4562-84e3-3a7e1f201ff5-config-volume") pod "coredns-787d4945fb-ktr7h" (UID: "5337fe89-b5a2-4562-84e3-3a7e1f201ff5") : object "kube-system"/"coredns" not registered
Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.167577 1292 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Feb 23 22:21:58 multinode-773885 kubelet[1292]: I0223 22:21:58.564631 1292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e749663c5c7e738a06bd131433cc331bdfe0302f4ed8652dc72907fd84e75f7f"
Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.592064 1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-ktr7h" podUID=5337fe89-b5a2-4562-84e3-3a7e1f201ff5
Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.808766 1292 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.808798 1292 projected.go:198] Error preparing data for projected volume kube-api-access-5k946 for pod default/busybox-6b86dd6d48-9b7sp: object "default"/"kube-root-ca.crt" not registered
Feb 23 22:21:58 multinode-773885 kubelet[1292]: E0223 22:21:58.808843 1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946 podName:7e6550d2-21fc-446e-ba91-4991f379de1c nodeName:}" failed. No retries permitted until 2023-02-23 22:22:02.808830445 +0000 UTC m=+15.080949197 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5k946" (UniqueName: "kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946") pod "busybox-6b86dd6d48-9b7sp" (UID: "7e6550d2-21fc-446e-ba91-4991f379de1c") : object "default"/"kube-root-ca.crt" not registered
Feb 23 22:21:59 multinode-773885 kubelet[1292]: E0223 22:21:59.637649 1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-9b7sp" podUID=7e6550d2-21fc-446e-ba91-4991f379de1c
Feb 23 22:22:00 multinode-773885 kubelet[1292]: E0223 22:22:00.141319 1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-ktr7h" podUID=5337fe89-b5a2-4562-84e3-3a7e1f201ff5
Feb 23 22:22:01 multinode-773885 kubelet[1292]: E0223 22:22:01.140900 1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-9b7sp" podUID=7e6550d2-21fc-446e-ba91-4991f379de1c
Feb 23 22:22:01 multinode-773885 kubelet[1292]: E0223 22:22:01.730126 1292 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Feb 23 22:22:01 multinode-773885 kubelet[1292]: E0223 22:22:01.730215 1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5337fe89-b5a2-4562-84e3-3a7e1f201ff5-config-volume podName:5337fe89-b5a2-4562-84e3-3a7e1f201ff5 nodeName:}" failed. No retries permitted until 2023-02-23 22:22:09.730200815 +0000 UTC m=+22.002319582 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5337fe89-b5a2-4562-84e3-3a7e1f201ff5-config-volume") pod "coredns-787d4945fb-ktr7h" (UID: "5337fe89-b5a2-4562-84e3-3a7e1f201ff5") : object "kube-system"/"coredns" not registered
Feb 23 22:22:02 multinode-773885 kubelet[1292]: E0223 22:22:02.141217 1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-ktr7h" podUID=5337fe89-b5a2-4562-84e3-3a7e1f201ff5
Feb 23 22:22:02 multinode-773885 kubelet[1292]: E0223 22:22:02.838248 1292 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Feb 23 22:22:02 multinode-773885 kubelet[1292]: E0223 22:22:02.838298 1292 projected.go:198] Error preparing data for projected volume kube-api-access-5k946 for pod default/busybox-6b86dd6d48-9b7sp: object "default"/"kube-root-ca.crt" not registered
Feb 23 22:22:02 multinode-773885 kubelet[1292]: E0223 22:22:02.838347 1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946 podName:7e6550d2-21fc-446e-ba91-4991f379de1c nodeName:}" failed. No retries permitted until 2023-02-23 22:22:10.838331472 +0000 UTC m=+23.110450224 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5k946" (UniqueName: "kubernetes.io/projected/7e6550d2-21fc-446e-ba91-4991f379de1c-kube-api-access-5k946") pod "busybox-6b86dd6d48-9b7sp" (UID: "7e6550d2-21fc-446e-ba91-4991f379de1c") : object "default"/"kube-root-ca.crt" not registered
Feb 23 22:22:03 multinode-773885 kubelet[1292]: E0223 22:22:03.140982 1292 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-9b7sp" podUID=7e6550d2-21fc-446e-ba91-4991f379de1c
Feb 23 22:22:26 multinode-773885 kubelet[1292]: I0223 22:22:26.975727 1292 scope.go:115] "RemoveContainer" containerID="b83daa4cdd8d8298126a07aab8f78401afc75993bca101cbb72ec10217214496"
Feb 23 22:22:26 multinode-773885 kubelet[1292]: I0223 22:22:26.976270 1292 scope.go:115] "RemoveContainer" containerID="27a3e00db0cef9776f9e3172722f98b3c96dbadc1022f977185f1e29d7dbd36a"
Feb 23 22:22:26 multinode-773885 kubelet[1292]: E0223 22:22:26.976460 1292 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(62cc7ef3-a47f-45ce-a9af-cf4de3e1824d)\"" pod="kube-system/storage-provisioner" podUID=62cc7ef3-a47f-45ce-a9af-cf4de3e1824d
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-773885 -n multinode-773885
helpers_test.go:261: (dbg) Run: kubectl --context multinode-773885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (114.14s)