=== RUN TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run: out/minikube-darwin-amd64 node list -p multinode-260000
multinode_test.go:288: (dbg) Run: out/minikube-darwin-amd64 stop -p multinode-260000
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-260000: (18.397689872s)
multinode_test.go:293: (dbg) Run: out/minikube-darwin-amd64 start -p multinode-260000 --wait=true -v=8 --alsologtostderr
E0307 10:28:08.338508 3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:29:18.411413 3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:29:36.859048 3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-260000 --wait=true -v=8 --alsologtostderr: exit status 90 (2m56.631240642s)
-- stdout --
* [multinode-260000] minikube v1.29.0 on Darwin 13.2.1
- MINIKUBE_LOCATION=15985
- KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the hyperkit driver based on existing profile
* Starting control plane node multinode-260000 in cluster multinode-260000
* Restarting existing hyperkit VM for "multinode-260000" ...
* Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
* Configuring CNI (Container Networking Interface) ...
* Enabled addons:
* Verifying Kubernetes components...
* Starting worker node multinode-260000-m02 in cluster multinode-260000
* Restarting existing hyperkit VM for "multinode-260000-m02" ...
* Found network options:
- NO_PROXY=192.168.64.12
* Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
- env NO_PROXY=192.168.64.12
* Verifying Kubernetes components...
* Starting worker node multinode-260000-m03 in cluster multinode-260000
* Restarting existing hyperkit VM for "multinode-260000-m03" ...
* Found network options:
- NO_PROXY=192.168.64.12,192.168.64.13
-- /stdout --
** stderr **
I0307 10:27:17.701567 7018 out.go:296] Setting OutFile to fd 1 ...
I0307 10:27:17.701766 7018 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 10:27:17.701771 7018 out.go:309] Setting ErrFile to fd 2...
I0307 10:27:17.701775 7018 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 10:27:17.701881 7018 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15985-3430/.minikube/bin
I0307 10:27:17.703156 7018 out.go:303] Setting JSON to false
I0307 10:27:17.723710 7018 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3412,"bootTime":1678210225,"procs":381,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0307 10:27:17.723849 7018 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0307 10:27:17.767920 7018 out.go:177] * [multinode-260000] minikube v1.29.0 on Darwin 13.2.1
I0307 10:27:17.789379 7018 notify.go:220] Checking for updates...
I0307 10:27:17.811044 7018 out.go:177] - MINIKUBE_LOCATION=15985
I0307 10:27:17.832029 7018 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:27:17.853161 7018 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0307 10:27:17.875122 7018 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0307 10:27:17.896016 7018 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
I0307 10:27:17.917197 7018 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0307 10:27:17.939813 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:27:17.939897 7018 driver.go:365] Setting default libvirt URI to qemu:///system
I0307 10:27:17.940536 7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:27:17.940612 7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0307 10:27:17.948145 7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51638
I0307 10:27:17.948508 7018 main.go:141] libmachine: () Calling .GetVersion
I0307 10:27:17.948945 7018 main.go:141] libmachine: Using API Version 1
I0307 10:27:17.948957 7018 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 10:27:17.949170 7018 main.go:141] libmachine: () Calling .GetMachineName
I0307 10:27:17.949257 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:17.976910 7018 out.go:177] * Using the hyperkit driver based on existing profile
I0307 10:27:18.019030 7018 start.go:296] selected driver: hyperkit
I0307 10:27:18.019085 7018 start.go:857] validating driver "hyperkit" against &{Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 10:27:18.019304 7018 start.go:868] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0307 10:27:18.019411 7018 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 10:27:18.019612 7018 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15985-3430/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0307 10:27:18.027551 7018 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.29.0
I0307 10:27:18.031921 7018 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:27:18.031941 7018 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0307 10:27:18.034844 7018 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0307 10:27:18.034876 7018 cni.go:84] Creating CNI manager for ""
I0307 10:27:18.034887 7018 cni.go:136] 3 nodes found, recommending kindnet
I0307 10:27:18.034896 7018 start_flags.go:319] config:
{Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 10:27:18.035029 7018 iso.go:125] acquiring lock: {Name:mk7e0ac9e85418e0580033b84b7097185a725e89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 10:27:18.076950 7018 out.go:177] * Starting control plane node multinode-260000 in cluster multinode-260000
I0307 10:27:18.098026 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:27:18.098116 7018 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
I0307 10:27:18.098148 7018 cache.go:57] Caching tarball of preloaded images
I0307 10:27:18.098313 7018 preload.go:174] Found /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0307 10:27:18.098333 7018 cache.go:60] Finished verifying existence of preloaded tar for v1.26.2 on docker
I0307 10:27:18.098530 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:27:18.099358 7018 cache.go:193] Successfully downloaded all kic artifacts
I0307 10:27:18.099407 7018 start.go:364] acquiring machines lock for multinode-260000: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 10:27:18.099512 7018 start.go:368] acquired machines lock for "multinode-260000" in 86.293µs
I0307 10:27:18.099554 7018 start.go:96] Skipping create...Using existing machine configuration
I0307 10:27:18.099566 7018 fix.go:55] fixHost starting:
I0307 10:27:18.100062 7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:27:18.100091 7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0307 10:27:18.107480 7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51640
I0307 10:27:18.107803 7018 main.go:141] libmachine: () Calling .GetVersion
I0307 10:27:18.108127 7018 main.go:141] libmachine: Using API Version 1
I0307 10:27:18.108137 7018 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 10:27:18.108326 7018 main.go:141] libmachine: () Calling .GetMachineName
I0307 10:27:18.108443 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:18.108543 7018 main.go:141] libmachine: (multinode-260000) Calling .GetState
I0307 10:27:18.108624 7018 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:27:18.108709 7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid from json: 6235
I0307 10:27:18.109465 7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid 6235 missing from process table
I0307 10:27:18.109498 7018 fix.go:103] recreateIfNeeded on multinode-260000: state=Stopped err=<nil>
I0307 10:27:18.109518 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
W0307 10:27:18.109599 7018 fix.go:129] unexpected machine state, will restart: <nil>
I0307 10:27:18.130859 7018 out.go:177] * Restarting existing hyperkit VM for "multinode-260000" ...
I0307 10:27:18.151952 7018 main.go:141] libmachine: (multinode-260000) Calling .Start
I0307 10:27:18.152162 7018 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:27:18.152193 7018 main.go:141] libmachine: (multinode-260000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid
I0307 10:27:18.153359 7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid 6235 missing from process table
I0307 10:27:18.153369 7018 main.go:141] libmachine: (multinode-260000) DBG | pid 6235 is in state "Stopped"
I0307 10:27:18.153384 7018 main.go:141] libmachine: (multinode-260000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid...
I0307 10:27:18.153520 7018 main.go:141] libmachine: (multinode-260000) DBG | Using UUID 6086a850-bd14-11ed-9c3c-149d997fca88
I0307 10:27:18.261699 7018 main.go:141] libmachine: (multinode-260000) DBG | Generated MAC f2:4e:cd:75:18:a7
I0307 10:27:18.261738 7018 main.go:141] libmachine: (multinode-260000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000
I0307 10:27:18.261843 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6086a850-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ecbd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0307 10:27:18.261893 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6086a850-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ecbd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0307 10:27:18.261955 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6086a850-bd14-11ed-9c3c-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/multinode-260000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"}
I0307 10:27:18.262040 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6086a850-bd14-11ed-9c3c-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/multinode-260000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/console-ring -f kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"
I0307 10:27:18.262064 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0307 10:27:18.263449 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Pid is 7033
I0307 10:27:18.263845 7018 main.go:141] libmachine: (multinode-260000) DBG | Attempt 0
I0307 10:27:18.263868 7018 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:27:18.263948 7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid from json: 7033
I0307 10:27:18.265382 7018 main.go:141] libmachine: (multinode-260000) DBG | Searching for f2:4e:cd:75:18:a7 in /var/db/dhcpd_leases ...
I0307 10:27:18.265430 7018 main.go:141] libmachine: (multinode-260000) DBG | Found 14 entries in /var/db/dhcpd_leases!
I0307 10:27:18.265476 7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x64078204}
I0307 10:27:18.265490 7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ca:14:a2:6d:d0:c ID:1,ca:14:a2:6d:d0:c Lease:0x6407819f}
I0307 10:27:18.265519 7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d194}
I0307 10:27:18.265530 7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d15a}
I0307 10:27:18.265540 7018 main.go:141] libmachine: (multinode-260000) DBG | Found match: f2:4e:cd:75:18:a7
I0307 10:27:18.265548 7018 main.go:141] libmachine: (multinode-260000) DBG | IP: 192.168.64.12
I0307 10:27:18.265590 7018 main.go:141] libmachine: (multinode-260000) Calling .GetConfigRaw
I0307 10:27:18.266196 7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
I0307 10:27:18.266384 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:27:18.266657 7018 machine.go:88] provisioning docker machine ...
I0307 10:27:18.266667 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:18.266773 7018 main.go:141] libmachine: (multinode-260000) Calling .GetMachineName
I0307 10:27:18.266878 7018 buildroot.go:166] provisioning hostname "multinode-260000"
I0307 10:27:18.266892 7018 main.go:141] libmachine: (multinode-260000) Calling .GetMachineName
I0307 10:27:18.266989 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:18.267073 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:18.267172 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:18.267250 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:18.267341 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:18.267461 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:27:18.267830 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.12 22 <nil> <nil>}
I0307 10:27:18.267839 7018 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-260000 && echo "multinode-260000" | sudo tee /etc/hostname
I0307 10:27:18.269902 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0307 10:27:18.319277 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0307 10:27:18.319873 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0307 10:27:18.319886 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0307 10:27:18.319904 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0307 10:27:18.319918 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0307 10:27:18.674514 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0307 10:27:18.674532 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0307 10:27:18.778516 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0307 10:27:18.778535 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0307 10:27:18.778566 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0307 10:27:18.778585 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0307 10:27:18.779423 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0307 10:27:18.779434 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0307 10:27:23.282731 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0307 10:27:23.282756 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0307 10:27:23.282762 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0307 10:27:53.345501 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-260000
I0307 10:27:53.345516 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:53.345641 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:53.345737 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.345814 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.345897 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:53.346017 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:27:53.346336 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.12 22 <nil> <nil>}
I0307 10:27:53.346349 7018 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-260000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-260000/g' /etc/hosts;
else
echo '127.0.1.1 multinode-260000' | sudo tee -a /etc/hosts;
fi
fi
I0307 10:27:53.408248 7018 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0307 10:27:53.408267 7018 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
I0307 10:27:53.408279 7018 buildroot.go:174] setting up certificates
I0307 10:27:53.408288 7018 provision.go:83] configureAuth start
I0307 10:27:53.408298 7018 main.go:141] libmachine: (multinode-260000) Calling .GetMachineName
I0307 10:27:53.408431 7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
I0307 10:27:53.408534 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:53.408622 7018 provision.go:138] copyHostCerts
I0307 10:27:53.408658 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
I0307 10:27:53.408716 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
I0307 10:27:53.408724 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
I0307 10:27:53.408836 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
I0307 10:27:53.409016 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
I0307 10:27:53.409051 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
I0307 10:27:53.409056 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
I0307 10:27:53.409119 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
I0307 10:27:53.409268 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
I0307 10:27:53.409298 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
I0307 10:27:53.409303 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
I0307 10:27:53.409364 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
I0307 10:27:53.409496 7018 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.multinode-260000 san=[192.168.64.12 192.168.64.12 localhost 127.0.0.1 minikube multinode-260000]
I0307 10:27:53.471318 7018 provision.go:172] copyRemoteCerts
I0307 10:27:53.471371 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0307 10:27:53.471386 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:53.471501 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:53.471590 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.471685 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:53.471784 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
I0307 10:27:53.506343 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0307 10:27:53.506415 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0307 10:27:53.522448 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem -> /etc/docker/server.pem
I0307 10:27:53.522505 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0307 10:27:53.538178 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0307 10:27:53.538241 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0307 10:27:53.554443 7018 provision.go:86] duration metric: configureAuth took 146.138879ms
I0307 10:27:53.554456 7018 buildroot.go:189] setting minikube options for container-runtime
I0307 10:27:53.554627 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:27:53.554640 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:53.554773 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:53.554871 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:53.554956 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.555028 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.555105 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:53.555212 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:27:53.555523 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.12 22 <nil> <nil>}
I0307 10:27:53.555532 7018 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0307 10:27:53.611701 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0307 10:27:53.611715 7018 buildroot.go:70] root file system type: tmpfs
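An aside on the check above: a tmpfs root means the guest filesystem is rebuilt on every boot, which is why the provisioner rewrites the certs and the docker unit each time instead of assuming they persist. The probe itself is a one-liner (GNU coreutils `df`; the result will differ on other hosts):

```shell
#!/bin/sh
# Print the filesystem type of /; the log shows "tmpfs" on the
# buildroot guest, so nothing written there survives a reboot.
df --output=fstype / | tail -n 1
```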
I0307 10:27:53.611791 7018 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0307 10:27:53.611806 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:53.611930 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:53.612020 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.612103 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.612184 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:53.612317 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:27:53.612630 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.12 22 <nil> <nil>}
I0307 10:27:53.612673 7018 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0307 10:27:53.678288 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
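For readers unfamiliar with the pattern the unit's own comments describe: an empty `ExecStart=` line resets the directive inherited from a base unit, so the next `ExecStart=` becomes the only one. A minimal, self-contained sketch of that reset idiom (file path and daemon command are illustrative, not taken from this log):

```shell
#!/bin/sh
# Write a drop-in that first clears ExecStart=, then sets a new one;
# systemd treats the empty assignment as "reset the inherited value",
# avoiding the "more than one ExecStart=" error for Type=notify units.
dir=$(mktemp -d)
cat > "$dir/override.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
grep -c '^ExecStart' "$dir/override.conf"
```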
I0307 10:27:53.678311 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:53.678443 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:53.678532 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.678617 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.678712 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:53.678844 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:27:53.679161 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.12 22 <nil> <nil>}
I0307 10:27:53.679175 7018 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0307 10:27:54.321619 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
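The `diff -u old new || { mv ...; restart; }` one-liner above replaces and restarts only when the staged unit differs from the installed one — or, as happens here, when the installed file does not exist yet, which makes `diff` fail with the `can't stat` message and still take the install branch. The idiom in isolation, against scratch files:

```shell
#!/bin/sh
# Install "staged" over "current" only when they differ; diff exits
# non-zero both on a real difference and when "current" is missing,
# so a first-time install and an update take the same path.
dir=$(mktemp -d)
current="$dir/docker.service"
staged="$dir/docker.service.new"
printf 'v2\n' > "$staged"
diff -u "$current" "$staged" >/dev/null 2>&1 || {
    mv "$staged" "$current"
    echo "installed"
}
```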
I0307 10:27:54.321632 7018 machine.go:91] provisioned docker machine in 36.054802092s
I0307 10:27:54.321643 7018 start.go:300] post-start starting for "multinode-260000" (driver="hyperkit")
I0307 10:27:54.321648 7018 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0307 10:27:54.321659 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:54.321839 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0307 10:27:54.321852 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:54.321961 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:54.322042 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:54.322149 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:54.322246 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
I0307 10:27:54.357925 7018 ssh_runner.go:195] Run: cat /etc/os-release
I0307 10:27:54.360302 7018 command_runner.go:130] > NAME=Buildroot
I0307 10:27:54.360311 7018 command_runner.go:130] > VERSION=2021.02.12-1-gab7f370-dirty
I0307 10:27:54.360321 7018 command_runner.go:130] > ID=buildroot
I0307 10:27:54.360325 7018 command_runner.go:130] > VERSION_ID=2021.02.12
I0307 10:27:54.360330 7018 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0307 10:27:54.360498 7018 info.go:137] Remote host: Buildroot 2021.02.12
I0307 10:27:54.360509 7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
I0307 10:27:54.360589 7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
I0307 10:27:54.360737 7018 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
I0307 10:27:54.360743 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /etc/ssl/certs/39032.pem
I0307 10:27:54.360917 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0307 10:27:54.366509 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
I0307 10:27:54.382252 7018 start.go:303] post-start completed in 60.601074ms
I0307 10:27:54.382265 7018 fix.go:57] fixHost completed within 36.282535453s
I0307 10:27:54.382281 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:54.382411 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:54.382494 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:54.382592 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:54.382687 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:54.382812 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:27:54.383114 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.12 22 <nil> <nil>}
I0307 10:27:54.383122 7018 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0307 10:27:54.438352 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678213674.566046378
I0307 10:27:54.438363 7018 fix.go:207] guest clock: 1678213674.566046378
I0307 10:27:54.438368 7018 fix.go:220] Guest: 2023-03-07 10:27:54.566046378 -0800 PST Remote: 2023-03-07 10:27:54.382269 -0800 PST m=+36.717005002 (delta=183.777378ms)
I0307 10:27:54.438390 7018 fix.go:191] guest clock delta is within tolerance: 183.777378ms
I0307 10:27:54.438395 7018 start.go:83] releasing machines lock for "multinode-260000", held for 36.33870613s
I0307 10:27:54.438412 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:54.438533 7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
I0307 10:27:54.438635 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:54.438919 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:54.439021 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:54.439107 7018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0307 10:27:54.439131 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:54.439139 7018 ssh_runner.go:195] Run: cat /version.json
I0307 10:27:54.439150 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:54.439230 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:54.439270 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:54.439355 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:54.439367 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:54.439464 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:54.439484 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:54.439556 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
I0307 10:27:54.439569 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
I0307 10:27:54.469202 7018 command_runner.go:130] > {"iso_version": "v1.29.0-1677261626-15923", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "d5f8b7c14d0e3cd88db476786b15ed1c8f7b9a62"}
I0307 10:27:54.469345 7018 ssh_runner.go:195] Run: systemctl --version
I0307 10:27:54.473110 7018 command_runner.go:130] > systemd 247 (247)
I0307 10:27:54.473123 7018 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I0307 10:27:54.510321 7018 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0307 10:27:54.511264 7018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0307 10:27:54.515706 7018 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0307 10:27:54.515766 7018 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0307 10:27:54.515808 7018 ssh_runner.go:195] Run: which cri-dockerd
I0307 10:27:54.518180 7018 command_runner.go:130] > /usr/bin/cri-dockerd
I0307 10:27:54.518271 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0307 10:27:54.524837 7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0307 10:27:54.535806 7018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0307 10:27:54.546514 7018 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0307 10:27:54.546672 7018 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
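The `find ... -exec` above disables conflicting bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them, which is why the podman conflist is reported as "disabled". The same rename-to-disable pattern against a scratch directory with sample file names:

```shell
#!/bin/sh
# Disable matching CNI configs by appending .mk_disabled, mirroring
# the find/-exec pattern in the log; already-disabled files are skipped.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-kindnet.conflist"
find "$d" -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) \
    -and -not -name '*.mk_disabled' \) \
    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```

Only `87-podman-bridge.conflist` matches the name patterns, so it gains the suffix while `10-kindnet.conflist` is left alone.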
I0307 10:27:54.546690 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:27:54.546786 7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 10:27:54.561856 7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
I0307 10:27:54.561870 7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
I0307 10:27:54.561875 7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
I0307 10:27:54.561879 7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
I0307 10:27:54.561885 7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
I0307 10:27:54.561889 7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0307 10:27:54.561893 7018 command_runner.go:130] > registry.k8s.io/pause:3.9
I0307 10:27:54.561898 7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0307 10:27:54.561902 7018 command_runner.go:130] > registry.k8s.io/pause:3.6
I0307 10:27:54.561906 7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0307 10:27:54.561912 7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0307 10:27:54.562858 7018 docker.go:630] Got preloaded images: -- stdout --
kindest/kindnetd:v20230227-15197099
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0307 10:27:54.562875 7018 docker.go:560] Images already preloaded, skipping extraction
I0307 10:27:54.562881 7018 start.go:485] detecting cgroup driver to use...
I0307 10:27:54.562957 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 10:27:54.574839 7018 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0307 10:27:54.574851 7018 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
I0307 10:27:54.575174 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0307 10:27:54.582305 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0307 10:27:54.589279 7018 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0307 10:27:54.589317 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0307 10:27:54.596289 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 10:27:54.603219 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0307 10:27:54.610180 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 10:27:54.617267 7018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0307 10:27:54.624610 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0307 10:27:54.631553 7018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0307 10:27:54.637786 7018 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0307 10:27:54.637952 7018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0307 10:27:54.644168 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:27:54.724435 7018 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0307 10:27:54.736384 7018 start.go:485] detecting cgroup driver to use...
I0307 10:27:54.736451 7018 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0307 10:27:54.745963 7018 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0307 10:27:54.745979 7018 command_runner.go:130] > [Unit]
I0307 10:27:54.745984 7018 command_runner.go:130] > Description=Docker Application Container Engine
I0307 10:27:54.745988 7018 command_runner.go:130] > Documentation=https://docs.docker.com
I0307 10:27:54.745993 7018 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0307 10:27:54.745999 7018 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0307 10:27:54.746004 7018 command_runner.go:130] > StartLimitBurst=3
I0307 10:27:54.746007 7018 command_runner.go:130] > StartLimitIntervalSec=60
I0307 10:27:54.746011 7018 command_runner.go:130] > [Service]
I0307 10:27:54.746014 7018 command_runner.go:130] > Type=notify
I0307 10:27:54.746017 7018 command_runner.go:130] > Restart=on-failure
I0307 10:27:54.746024 7018 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0307 10:27:54.746040 7018 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0307 10:27:54.746047 7018 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0307 10:27:54.746053 7018 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0307 10:27:54.746068 7018 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0307 10:27:54.746075 7018 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0307 10:27:54.746081 7018 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0307 10:27:54.746090 7018 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0307 10:27:54.746099 7018 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0307 10:27:54.746104 7018 command_runner.go:130] > ExecStart=
I0307 10:27:54.746114 7018 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
I0307 10:27:54.746119 7018 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0307 10:27:54.746130 7018 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0307 10:27:54.746136 7018 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0307 10:27:54.746140 7018 command_runner.go:130] > LimitNOFILE=infinity
I0307 10:27:54.746143 7018 command_runner.go:130] > LimitNPROC=infinity
I0307 10:27:54.746147 7018 command_runner.go:130] > LimitCORE=infinity
I0307 10:27:54.746156 7018 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0307 10:27:54.746161 7018 command_runner.go:130] > # Only systemd 226 and above support this version.
I0307 10:27:54.746165 7018 command_runner.go:130] > TasksMax=infinity
I0307 10:27:54.746168 7018 command_runner.go:130] > TimeoutStartSec=0
I0307 10:27:54.746173 7018 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0307 10:27:54.746179 7018 command_runner.go:130] > Delegate=yes
I0307 10:27:54.746184 7018 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0307 10:27:54.746188 7018 command_runner.go:130] > KillMode=process
I0307 10:27:54.746191 7018 command_runner.go:130] > [Install]
I0307 10:27:54.746201 7018 command_runner.go:130] > WantedBy=multi-user.target
I0307 10:27:54.746263 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 10:27:54.754873 7018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0307 10:27:54.766931 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 10:27:54.775320 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 10:27:54.784274 7018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0307 10:27:54.810077 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 10:27:54.819002 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 10:27:54.830417 7018 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0307 10:27:54.830427 7018 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0307 10:27:54.830775 7018 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0307 10:27:54.910530 7018 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0307 10:27:54.991106 7018 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0307 10:27:54.991125 7018 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0307 10:27:55.002612 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:27:55.082706 7018 ssh_runner.go:195] Run: sudo systemctl restart docker
I0307 10:27:56.344251 7018 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.261521172s)
I0307 10:27:56.344319 7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 10:27:56.427984 7018 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0307 10:27:56.518324 7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 10:27:56.611821 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:27:56.699165 7018 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0307 10:27:56.710403 7018 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0307 10:27:56.710477 7018 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0307 10:27:56.714055 7018 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0307 10:27:56.714067 7018 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0307 10:27:56.714072 7018 command_runner.go:130] > Device: 16h/22d Inode: 853 Links: 1
I0307 10:27:56.714079 7018 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0307 10:27:56.714098 7018 command_runner.go:130] > Access: 2023-03-07 18:27:56.836416904 +0000
I0307 10:27:56.714105 7018 command_runner.go:130] > Modify: 2023-03-07 18:27:56.836416904 +0000
I0307 10:27:56.714109 7018 command_runner.go:130] > Change: 2023-03-07 18:27:56.838416903 +0000
I0307 10:27:56.714113 7018 command_runner.go:130] > Birth: -
I0307 10:27:56.714136 7018 start.go:553] Will wait 60s for crictl version
I0307 10:27:56.714180 7018 ssh_runner.go:195] Run: which crictl
I0307 10:27:56.716256 7018 command_runner.go:130] > /usr/bin/crictl
I0307 10:27:56.716479 7018 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0307 10:27:56.782605 7018 command_runner.go:130] > Version: 0.1.0
I0307 10:27:56.782630 7018 command_runner.go:130] > RuntimeName: docker
I0307 10:27:56.782659 7018 command_runner.go:130] > RuntimeVersion: 20.10.23
I0307 10:27:56.782788 7018 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0307 10:27:56.786182 7018 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0307 10:27:56.786249 7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 10:27:56.806368 7018 command_runner.go:130] > 20.10.23
I0307 10:27:56.807205 7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 10:27:56.827016 7018 command_runner.go:130] > 20.10.23
I0307 10:27:56.870119 7018 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
I0307 10:27:56.870166 7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
I0307 10:27:56.870574 7018 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0307 10:27:56.874782 7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
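The hosts-file rewrite above filters out any stale `host.minikube.internal` line, appends the current gateway IP, and then copies the temp file back over `/etc/hosts` (`cp` rather than `mv` keeps the original inode, which matters when the file is bind-mounted into containers). The same pattern against a scratch file, with the log's tab-anchored `$'\t...'` match simplified to a plain grep:

```shell
#!/bin/sh
# Remove any stale host.minikube.internal entry, append the new one,
# and copy the temp file back over the original (scratch file here).
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.64.9 host.minikube.internal\n' > "$hosts"
tmp=$(mktemp)
{ grep -v 'host.minikube.internal' "$hosts"; \
  echo '192.168.64.1 host.minikube.internal'; } > "$tmp"
cp "$tmp" "$hosts"
grep 'host.minikube.internal' "$hosts"
```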
I0307 10:27:56.882699 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:27:56.882759 7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 10:27:56.898148 7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
I0307 10:27:56.898160 7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
I0307 10:27:56.898164 7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
I0307 10:27:56.898169 7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
I0307 10:27:56.898172 7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
I0307 10:27:56.898176 7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0307 10:27:56.898180 7018 command_runner.go:130] > registry.k8s.io/pause:3.9
I0307 10:27:56.898184 7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0307 10:27:56.898188 7018 command_runner.go:130] > registry.k8s.io/pause:3.6
I0307 10:27:56.898197 7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0307 10:27:56.898202 7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0307 10:27:56.898858 7018 docker.go:630] Got preloaded images: -- stdout --
kindest/kindnetd:v20230227-15197099
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0307 10:27:56.898867 7018 docker.go:560] Images already preloaded, skipping extraction
I0307 10:27:56.898945 7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 10:27:56.913839 7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
I0307 10:27:56.913851 7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
I0307 10:27:56.913855 7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
I0307 10:27:56.913869 7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
I0307 10:27:56.913873 7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
I0307 10:27:56.913877 7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0307 10:27:56.913881 7018 command_runner.go:130] > registry.k8s.io/pause:3.9
I0307 10:27:56.913885 7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0307 10:27:56.913889 7018 command_runner.go:130] > registry.k8s.io/pause:3.6
I0307 10:27:56.913893 7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0307 10:27:56.913900 7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0307 10:27:56.914547 7018 docker.go:630] Got preloaded images: -- stdout --
kindest/kindnetd:v20230227-15197099
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0307 10:27:56.914562 7018 cache_images.go:84] Images are preloaded, skipping loading
I0307 10:27:56.914636 7018 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0307 10:27:56.935563 7018 command_runner.go:130] > cgroupfs
I0307 10:27:56.936272 7018 cni.go:84] Creating CNI manager for ""
I0307 10:27:56.936282 7018 cni.go:136] 3 nodes found, recommending kindnet
I0307 10:27:56.936296 7018 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0307 10:27:56.936310 7018 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.12 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-260000 NodeName:multinode-260000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0307 10:27:56.936405 7018 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.12
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-260000"
kubeletExtraArgs:
node-ip: 192.168.64.12
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.12"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0307 10:27:56.936460 7018 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-260000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.12
[Install]
config:
{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
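The kubelet unit dump above uses the standard systemd drop-in override: an empty `ExecStart=` line first clears the command inherited from the base unit file, and the following `ExecStart=` line becomes the only command systemd runs. A small sketch of that shape (hypothetical drop-in path; it only writes the file, no systemd involved):

```shell
# Write a drop-in fragment the way minikube's 10-kubeadm.conf is shaped.
DROPIN=$(mktemp -d)/10-kubeadm.conf
cat > "$DROPIN" <<'EOF'
[Service]
# Blank ExecStart= clears the ExecStart inherited from the base unit file;
# the next line then becomes the only command systemd runs.
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --config=/var/lib/kubelet/config.yaml
EOF
grep -c '^ExecStart' "$DROPIN"
```

Without the blank line, systemd would reject the unit for having two `ExecStart=` values on a non-oneshot service.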
I0307 10:27:56.936536 7018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
I0307 10:27:56.943109 7018 command_runner.go:130] > kubeadm
I0307 10:27:56.943116 7018 command_runner.go:130] > kubectl
I0307 10:27:56.943120 7018 command_runner.go:130] > kubelet
I0307 10:27:56.943263 7018 binaries.go:44] Found k8s binaries, skipping transfer
I0307 10:27:56.943308 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0307 10:27:56.949592 7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (449 bytes)
I0307 10:27:56.960366 7018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0307 10:27:56.970938 7018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2095 bytes)
I0307 10:27:56.982338 7018 ssh_runner.go:195] Run: grep 192.168.64.12 control-plane.minikube.internal$ /etc/hosts
I0307 10:27:56.984586 7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.12 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0307 10:27:56.991939 7018 certs.go:56] Setting up /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000 for IP: 192.168.64.12
I0307 10:27:56.991953 7018 certs.go:186] acquiring lock for shared ca certs: {Name:mk21aa92235e3b083ba3cf4a52527e5734aca22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 10:27:56.992091 7018 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key
I0307 10:27:56.992154 7018 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key
I0307 10:27:56.992245 7018 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key
I0307 10:27:56.992309 7018 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.key.546ed142
I0307 10:27:56.992376 7018 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.key
I0307 10:27:56.992385 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0307 10:27:56.992414 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0307 10:27:56.992439 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0307 10:27:56.992461 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0307 10:27:56.992479 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0307 10:27:56.992497 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0307 10:27:56.992518 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0307 10:27:56.992536 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0307 10:27:56.992623 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem (1338 bytes)
W0307 10:27:56.992661 7018 certs.go:397] ignoring /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903_empty.pem, impossibly tiny 0 bytes
I0307 10:27:56.992672 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem (1675 bytes)
I0307 10:27:56.992706 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem (1082 bytes)
I0307 10:27:56.992736 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem (1123 bytes)
I0307 10:27:56.992769 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem (1675 bytes)
I0307 10:27:56.992838 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem (1708 bytes)
I0307 10:27:56.992873 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0307 10:27:56.992892 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem -> /usr/share/ca-certificates/3903.pem
I0307 10:27:56.992913 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /usr/share/ca-certificates/39032.pem
I0307 10:27:56.993367 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0307 10:27:57.008967 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0307 10:27:57.024057 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0307 10:27:57.039253 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0307 10:27:57.054424 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0307 10:27:57.069714 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0307 10:27:57.085285 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0307 10:27:57.100487 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0307 10:27:57.116166 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0307 10:27:57.131487 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem --> /usr/share/ca-certificates/3903.pem (1338 bytes)
I0307 10:27:57.146782 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /usr/share/ca-certificates/39032.pem (1708 bytes)
I0307 10:27:57.161670 7018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0307 10:27:57.172684 7018 ssh_runner.go:195] Run: openssl version
I0307 10:27:57.175822 7018 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0307 10:27:57.176031 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/39032.pem && ln -fs /usr/share/ca-certificates/39032.pem /etc/ssl/certs/39032.pem"
I0307 10:27:57.182397 7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/39032.pem
I0307 10:27:57.185195 7018 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 7 18:06 /usr/share/ca-certificates/39032.pem
I0307 10:27:57.185263 7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 7 18:06 /usr/share/ca-certificates/39032.pem
I0307 10:27:57.185306 7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/39032.pem
I0307 10:27:57.188613 7018 command_runner.go:130] > 3ec20f2e
I0307 10:27:57.188881 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/39032.pem /etc/ssl/certs/3ec20f2e.0"
I0307 10:27:57.195955 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0307 10:27:57.203206 7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0307 10:27:57.205892 7018 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 7 18:02 /usr/share/ca-certificates/minikubeCA.pem
I0307 10:27:57.206086 7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 7 18:02 /usr/share/ca-certificates/minikubeCA.pem
I0307 10:27:57.206121 7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0307 10:27:57.209355 7018 command_runner.go:130] > b5213941
I0307 10:27:57.209587 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0307 10:27:57.216626 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3903.pem && ln -fs /usr/share/ca-certificates/3903.pem /etc/ssl/certs/3903.pem"
I0307 10:27:57.223521 7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3903.pem
I0307 10:27:57.226194 7018 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 7 18:06 /usr/share/ca-certificates/3903.pem
I0307 10:27:57.226381 7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 7 18:06 /usr/share/ca-certificates/3903.pem
I0307 10:27:57.226417 7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3903.pem
I0307 10:27:57.229589 7018 command_runner.go:130] > 51391683
I0307 10:27:57.229807 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3903.pem /etc/ssl/certs/51391683.0"
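The `openssl x509 -hash` / `ln -fs ... /etc/ssl/certs/<hash>.0` sequence above is how OpenSSL's lookup-by-subject-hash works: tools resolve a CA by hashing its subject and opening `<hash>.0` in the certs directory. A sketch with a throwaway self-signed cert in a temp directory (hypothetical `demoCA` name; requires the `openssl` CLI):

```shell
# Generate a throwaway self-signed cert to stand in for one of the CA PEMs.
CERTDIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -keyout "$CERTDIR/demo.key" -out "$CERTDIR/demo.pem" 2>/dev/null

# Compute the subject hash and create the <hash>.0 symlink, as in the log.
HASH=$(openssl x509 -hash -noout -in "$CERTDIR/demo.pem")
ln -fs "demo.pem" "$CERTDIR/$HASH.0"
```

Opening the cert via the symlink now yields the same certificate, which is exactly what OpenSSL's hashed-directory lookup relies on.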
I0307 10:27:57.236882 7018 kubeadm.go:401] StartCluster: {Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 10:27:57.236992 7018 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0307 10:27:57.252692 7018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0307 10:27:57.259210 7018 command_runner.go:130] > /var/lib/kubelet/config.yaml
I0307 10:27:57.259222 7018 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
I0307 10:27:57.259230 7018 command_runner.go:130] > /var/lib/minikube/etcd:
I0307 10:27:57.259234 7018 command_runner.go:130] > member
I0307 10:27:57.259381 7018 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0307 10:27:57.259400 7018 kubeadm.go:633] restartCluster start
I0307 10:27:57.259443 7018 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0307 10:27:57.266382 7018 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0307 10:27:57.266677 7018 kubeconfig.go:135] verify returned: extract IP: "multinode-260000" does not appear in /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:27:57.266753 7018 kubeconfig.go:146] "multinode-260000" context is missing from /Users/jenkins/minikube-integration/15985-3430/kubeconfig - will repair!
I0307 10:27:57.266945 7018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15985-3430/kubeconfig: {Name:mkea569ea3041d84fd3aeaa788f308c9891aa7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 10:27:57.267393 7018 loader.go:373] Config loaded from file: /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:27:57.267600 7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 10:27:57.268098 7018 cert_rotation.go:137] Starting client certificate rotation controller
I0307 10:27:57.268266 7018 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0307 10:27:57.274410 7018 api_server.go:165] Checking apiserver status ...
I0307 10:27:57.274450 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:27:57.282537 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:27:57.783579 7018 api_server.go:165] Checking apiserver status ...
I0307 10:27:57.783768 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:27:57.794313 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:27:58.283596 7018 api_server.go:165] Checking apiserver status ...
I0307 10:27:58.283730 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:27:58.294644 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:27:58.782684 7018 api_server.go:165] Checking apiserver status ...
I0307 10:27:58.782873 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:27:58.793430 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:27:59.283543 7018 api_server.go:165] Checking apiserver status ...
I0307 10:27:59.283649 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:27:59.294225 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:27:59.782887 7018 api_server.go:165] Checking apiserver status ...
I0307 10:27:59.783019 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:27:59.793607 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:00.282689 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:00.282922 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:00.292782 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:00.784107 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:00.784212 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:00.794376 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:01.283293 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:01.283433 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:01.293684 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:01.783681 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:01.783913 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:01.794869 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:02.283942 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:02.284074 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:02.294517 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:02.782945 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:02.783113 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:02.794006 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:03.284588 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:03.284777 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:03.294981 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:03.783910 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:03.784171 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:03.795492 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:04.283913 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:04.284104 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:04.294550 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:04.784723 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:04.784921 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:04.795506 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:05.284742 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:05.284884 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:05.294924 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:05.784725 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:05.784834 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:05.795470 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:06.284719 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:06.284873 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:06.295722 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:06.784533 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:06.784754 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:06.795131 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:07.284699 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:07.287011 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:07.296334 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:07.296343 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:07.296382 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:07.304816 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:07.304829 7018 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
I0307 10:28:07.304833 7018 kubeadm.go:1120] stopping kube-system containers ...
I0307 10:28:07.304891 7018 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0307 10:28:07.321379 7018 command_runner.go:130] > da06b08e5617
I0307 10:28:07.321390 7018 command_runner.go:130] > c4559ff3518d
I0307 10:28:07.321394 7018 command_runner.go:130] > 5b66601ca9d1
I0307 10:28:07.321398 7018 command_runner.go:130] > 0ace7c6cf637
I0307 10:28:07.321401 7018 command_runner.go:130] > 37e6cf092e1c
I0307 10:28:07.321411 7018 command_runner.go:130] > ae9d394ad7a7
I0307 10:28:07.321416 7018 command_runner.go:130] > 808d83da8d84
I0307 10:28:07.321423 7018 command_runner.go:130] > 1bf0ab9eb4c5
I0307 10:28:07.321426 7018 command_runner.go:130] > 2243964fbc4d
I0307 10:28:07.321432 7018 command_runner.go:130] > 3b27eb7db4c2
I0307 10:28:07.321436 7018 command_runner.go:130] > 10d167b9d987
I0307 10:28:07.321440 7018 command_runner.go:130] > 6ac51e9516a2
I0307 10:28:07.321443 7018 command_runner.go:130] > 3e9b5dec9e21
I0307 10:28:07.321448 7018 command_runner.go:130] > 0721a87b433b
I0307 10:28:07.321452 7018 command_runner.go:130] > aef4edf5b492
I0307 10:28:07.321456 7018 command_runner.go:130] > cfcf920b7378
I0307 10:28:07.322130 7018 docker.go:456] Stopping containers: [da06b08e5617 c4559ff3518d 5b66601ca9d1 0ace7c6cf637 37e6cf092e1c ae9d394ad7a7 808d83da8d84 1bf0ab9eb4c5 2243964fbc4d 3b27eb7db4c2 10d167b9d987 6ac51e9516a2 3e9b5dec9e21 0721a87b433b aef4edf5b492 cfcf920b7378]
I0307 10:28:07.322197 7018 ssh_runner.go:195] Run: docker stop da06b08e5617 c4559ff3518d 5b66601ca9d1 0ace7c6cf637 37e6cf092e1c ae9d394ad7a7 808d83da8d84 1bf0ab9eb4c5 2243964fbc4d 3b27eb7db4c2 10d167b9d987 6ac51e9516a2 3e9b5dec9e21 0721a87b433b aef4edf5b492 cfcf920b7378
I0307 10:28:07.338863 7018 command_runner.go:130] > da06b08e5617
I0307 10:28:07.338874 7018 command_runner.go:130] > c4559ff3518d
I0307 10:28:07.339268 7018 command_runner.go:130] > 5b66601ca9d1
I0307 10:28:07.339476 7018 command_runner.go:130] > 0ace7c6cf637
I0307 10:28:07.339531 7018 command_runner.go:130] > 37e6cf092e1c
I0307 10:28:07.339608 7018 command_runner.go:130] > ae9d394ad7a7
I0307 10:28:07.339615 7018 command_runner.go:130] > 808d83da8d84
I0307 10:28:07.339735 7018 command_runner.go:130] > 1bf0ab9eb4c5
I0307 10:28:07.339806 7018 command_runner.go:130] > 2243964fbc4d
I0307 10:28:07.339952 7018 command_runner.go:130] > 3b27eb7db4c2
I0307 10:28:07.340042 7018 command_runner.go:130] > 10d167b9d987
I0307 10:28:07.340172 7018 command_runner.go:130] > 6ac51e9516a2
I0307 10:28:07.340231 7018 command_runner.go:130] > 3e9b5dec9e21
I0307 10:28:07.340237 7018 command_runner.go:130] > 0721a87b433b
I0307 10:28:07.340416 7018 command_runner.go:130] > aef4edf5b492
I0307 10:28:07.340541 7018 command_runner.go:130] > cfcf920b7378
I0307 10:28:07.341444 7018 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0307 10:28:07.352567 7018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0307 10:28:07.358762 7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0307 10:28:07.358772 7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0307 10:28:07.358778 7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0307 10:28:07.358784 7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0307 10:28:07.358923 7018 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0307 10:28:07.358971 7018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0307 10:28:07.365297 7018 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0307 10:28:07.365309 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0307 10:28:07.435009 7018 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0307 10:28:07.435021 7018 command_runner.go:130] > [certs] Using existing ca certificate authority
I0307 10:28:07.435026 7018 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I0307 10:28:07.435249 7018 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0307 10:28:07.435474 7018 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
I0307 10:28:07.435692 7018 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
I0307 10:28:07.436004 7018 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
I0307 10:28:07.436233 7018 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
I0307 10:28:07.436509 7018 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
I0307 10:28:07.436724 7018 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0307 10:28:07.436961 7018 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
I0307 10:28:07.437121 7018 command_runner.go:130] > [certs] Using the existing "sa" key
I0307 10:28:07.438004 7018 command_runner.go:130] ! W0307 18:28:07.567847 1206 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:07.438020 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0307 10:28:07.477158 7018 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0307 10:28:07.530979 7018 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0307 10:28:07.671495 7018 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0307 10:28:07.806243 7018 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0307 10:28:08.012059 7018 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0307 10:28:08.013940 7018 command_runner.go:130] ! W0307 18:28:07.610432 1212 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:08.013962 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0307 10:28:08.064445 7018 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0307 10:28:08.064458 7018 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0307 10:28:08.064462 7018 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0307 10:28:08.158176 7018 command_runner.go:130] ! W0307 18:28:08.188188 1218 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:08.158212 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0307 10:28:08.205939 7018 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0307 10:28:08.205952 7018 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0307 10:28:08.207362 7018 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0307 10:28:08.208239 7018 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0307 10:28:08.211123 7018 command_runner.go:130] ! W0307 18:28:08.337529 1240 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:08.211182 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0307 10:28:08.268874 7018 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0307 10:28:08.276469 7018 command_runner.go:130] ! W0307 18:28:08.400815 1250 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:08.276569 7018 api_server.go:51] waiting for apiserver process to appear ...
I0307 10:28:08.276628 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 10:28:08.791796 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 10:28:09.291418 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 10:28:09.790079 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 10:28:10.289945 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 10:28:10.300303 7018 command_runner.go:130] > 1604
I0307 10:28:10.300322 7018 api_server.go:71] duration metric: took 2.023748028s to wait for apiserver process to appear ...
I0307 10:28:10.300332 7018 api_server.go:87] waiting for apiserver healthz status ...
I0307 10:28:10.300340 7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
I0307 10:28:13.002874 7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0307 10:28:13.002891 7018 api_server.go:102] status: https://192.168.64.12:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0307 10:28:13.505043 7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
I0307 10:28:13.511549 7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0307 10:28:13.511564 7018 api_server.go:102] status: https://192.168.64.12:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0307 10:28:14.003030 7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
I0307 10:28:14.007459 7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0307 10:28:14.007479 7018 api_server.go:102] status: https://192.168.64.12:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0307 10:28:14.504449 7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
I0307 10:28:14.508376 7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 200:
ok
I0307 10:28:14.508433 7018 round_trippers.go:463] GET https://192.168.64.12:8443/version
I0307 10:28:14.508438 7018 round_trippers.go:469] Request Headers:
I0307 10:28:14.508446 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:14.508452 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:14.516136 7018 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0307 10:28:14.516148 7018 round_trippers.go:577] Response Headers:
I0307 10:28:14.516154 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:14.516158 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:14.516163 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:14.516168 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:14.516173 7018 round_trippers.go:580] Content-Length: 263
I0307 10:28:14.516178 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:14 GMT
I0307 10:28:14.516185 7018 round_trippers.go:580] Audit-Id: 364007ce-aca2-49dd-9978-704f40503cf3
I0307 10:28:14.516202 7018 request.go:1171] Response Body: {
"major": "1",
"minor": "26",
"gitVersion": "v1.26.2",
"gitCommit": "fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b",
"gitTreeState": "clean",
"buildDate": "2023-02-22T13:32:22Z",
"goVersion": "go1.19.6",
"compiler": "gc",
"platform": "linux/amd64"
}
I0307 10:28:14.516246 7018 api_server.go:140] control plane version: v1.26.2
I0307 10:28:14.516254 7018 api_server.go:130] duration metric: took 4.215899257s to wait for apiserver health ...
I0307 10:28:14.516265 7018 cni.go:84] Creating CNI manager for ""
I0307 10:28:14.516271 7018 cni.go:136] 3 nodes found, recommending kindnet
I0307 10:28:14.538513 7018 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0307 10:28:14.558703 7018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0307 10:28:14.565010 7018 command_runner.go:130] > File: /opt/cni/bin/portmap
I0307 10:28:14.565023 7018 command_runner.go:130] > Size: 2798344 Blocks: 5472 IO Block: 4096 regular file
I0307 10:28:14.565030 7018 command_runner.go:130] > Device: 11h/17d Inode: 3542 Links: 1
I0307 10:28:14.565035 7018 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0307 10:28:14.565040 7018 command_runner.go:130] > Access: 2023-03-07 18:27:25.800133630 +0000
I0307 10:28:14.565044 7018 command_runner.go:130] > Modify: 2023-02-24 23:58:49.000000000 +0000
I0307 10:28:14.565049 7018 command_runner.go:130] > Change: 2023-03-07 18:27:24.520133706 +0000
I0307 10:28:14.565052 7018 command_runner.go:130] > Birth: -
I0307 10:28:14.565080 7018 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
I0307 10:28:14.565086 7018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0307 10:28:14.614484 7018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0307 10:28:15.463255 7018 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0307 10:28:15.465520 7018 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0307 10:28:15.467209 7018 command_runner.go:130] > serviceaccount/kindnet unchanged
I0307 10:28:15.486465 7018 command_runner.go:130] > daemonset.apps/kindnet configured
I0307 10:28:15.487964 7018 system_pods.go:43] waiting for kube-system pods to appear ...
I0307 10:28:15.488018 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:15.488023 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.488030 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.488035 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.490928 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:15.490936 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.490945 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.490952 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.490959 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.490966 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.490971 7018 round_trippers.go:580] Audit-Id: fbf2e35b-55b7-466f-9275-31e56ce04183
I0307 10:28:15.490978 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.492557 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1032"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"402","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81648 chars]
I0307 10:28:15.495381 7018 system_pods.go:59] 12 kube-system pods found
I0307 10:28:15.495395 7018 system_pods.go:61] "coredns-787d4945fb-x8m8v" [c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6] Running
I0307 10:28:15.495400 7018 system_pods.go:61] "etcd-multinode-260000" [aa53b0f1-968e-450d-90b2-ad26a79cea99] Running
I0307 10:28:15.495403 7018 system_pods.go:61] "kindnet-gfgwn" [64dc8044-f77e-41b4-bb19-1a254bf29e05] Running
I0307 10:28:15.495407 7018 system_pods.go:61] "kindnet-j5gj9" [f17b9702-c5c0-4b31-a136-e0370bc62d79] Running
I0307 10:28:15.495411 7018 system_pods.go:61] "kindnet-z6kqp" [4884d21b-1be9-4b53-8f70-dd4fe0efa264] Running
I0307 10:28:15.495415 7018 system_pods.go:61] "kube-apiserver-multinode-260000" [64ba25bc-eee2-433a-b0ef-a13769f04555] Running
I0307 10:28:15.495421 7018 system_pods.go:61] "kube-controller-manager-multinode-260000" [8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0307 10:28:15.495425 7018 system_pods.go:61] "kube-proxy-8qwhq" [3e455149-bbe2-4173-a413-f4962626b233] Running
I0307 10:28:15.495429 7018 system_pods.go:61] "kube-proxy-pxshj" [3ee33e87-083d-4833-a6d4-8b459ec6ea70] Running
I0307 10:28:15.495433 7018 system_pods.go:61] "kube-proxy-q8cm8" [b9f69548-a872-4d80-aa73-ffba99b33229] Running
I0307 10:28:15.495437 7018 system_pods.go:61] "kube-scheduler-multinode-260000" [0739e1eb-4026-47ee-b2fe-6a9901c77317] Running
I0307 10:28:15.495441 7018 system_pods.go:61] "storage-provisioner" [0b88c317-8e90-4927-b4f8-cae5597b5dc8] Running
I0307 10:28:15.495444 7018 system_pods.go:74] duration metric: took 7.473493ms to wait for pod list to return data ...
I0307 10:28:15.495451 7018 node_conditions.go:102] verifying NodePressure condition ...
I0307 10:28:15.495484 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
I0307 10:28:15.495488 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.495494 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.495499 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.497193 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.497203 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.497209 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.497215 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.497225 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.497237 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.497246 7018 round_trippers.go:580] Audit-Id: 87494186-1238-43d5-866d-3fb8cf3ac670
I0307 10:28:15.497252 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.497439 7018 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1032"},"items":[{"metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16457 chars]
I0307 10:28:15.497964 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:28:15.497980 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:28:15.497991 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:28:15.497994 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:28:15.497998 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:28:15.498001 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:28:15.498005 7018 node_conditions.go:105] duration metric: took 2.549988ms to run NodePressure ...
I0307 10:28:15.498014 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0307 10:28:15.613921 7018 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I0307 10:28:15.647095 7018 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I0307 10:28:15.648104 7018 command_runner.go:130] ! W0307 18:28:15.688091 2114 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:15.648194 7018 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0307 10:28:15.648246 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
I0307 10:28:15.648251 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.648257 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.648262 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.650635 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:15.650643 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.650648 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.650653 7018 round_trippers.go:580] Audit-Id: cb509b59-97eb-4381-8070-69cc8abdab39
I0307 10:28:15.650664 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.650670 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.650675 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.650683 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.651119 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1034"},"items":[{"metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"288","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28366 chars]
I0307 10:28:15.651785 7018 kubeadm.go:784] kubelet initialised
I0307 10:28:15.651796 7018 kubeadm.go:785] duration metric: took 3.59091ms waiting for restarted kubelet to initialise ...
I0307 10:28:15.651802 7018 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 10:28:15.651829 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:15.651834 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.651840 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.651856 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.654797 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:15.654807 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.654812 7018 round_trippers.go:580] Audit-Id: a9d90e98-0ed7-4ce3-b64a-cc82a3347b6f
I0307 10:28:15.654817 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.654823 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.654828 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.654832 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.654837 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.656020 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1034"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"402","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81648 chars]
I0307 10:28:15.657761 7018 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
I0307 10:28:15.657793 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:15.657798 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.657805 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.657811 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.659065 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.659077 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.659085 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.659092 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.659098 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.659104 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.659109 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.659115 7018 round_trippers.go:580] Audit-Id: eb2db07a-7079-4adb-a12f-c3919e2af0f0
I0307 10:28:15.659276 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"402","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6281 chars]
I0307 10:28:15.659508 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:15.659514 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.659520 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.659526 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.660689 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.660696 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.660701 7018 round_trippers.go:580] Audit-Id: 4dd3efdc-1609-4f2d-9ae0-4a842093d527
I0307 10:28:15.660706 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.660711 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.660717 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.660724 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.660734 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.660828 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:15.660996 7018 pod_ready.go:97] node "multinode-260000" hosting pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.661003 7018 pod_ready.go:81] duration metric: took 3.233228ms waiting for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
E0307 10:28:15.661009 7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.661014 7018 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:15.661036 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-260000
I0307 10:28:15.661040 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.661046 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.661051 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.662218 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.662226 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.662232 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.662238 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.662244 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.662249 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.662254 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.662258 7018 round_trippers.go:580] Audit-Id: eeb6ea95-4efc-44d3-86d7-f3e9abc4f441
I0307 10:28:15.662373 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"288","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5846 chars]
I0307 10:28:15.662566 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:15.662572 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.662578 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.662586 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.663695 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.663702 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.663708 7018 round_trippers.go:580] Audit-Id: 0c08723d-f6d6-4c3f-bc19-ce14073bddc8
I0307 10:28:15.663713 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.663718 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.663724 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.663728 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.663733 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.663841 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:15.664005 7018 pod_ready.go:97] node "multinode-260000" hosting pod "etcd-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.664012 7018 pod_ready.go:81] duration metric: took 2.993408ms waiting for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
E0307 10:28:15.664024 7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "etcd-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.664031 7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:15.664054 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-260000
I0307 10:28:15.664059 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.664064 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.664070 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.665133 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.665140 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.665145 7018 round_trippers.go:580] Audit-Id: d8155bb7-ed68-40c6-a807-4b433cb29ded
I0307 10:28:15.665164 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.665181 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.665188 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.665193 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.665199 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.665314 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-260000","namespace":"kube-system","uid":"64ba25bc-eee2-433a-b0ef-a13769f04555","resourceVersion":"269","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"76402f877907c95a3936143f580968be","kubernetes.io/config.mirror":"76402f877907c95a3936143f580968be","kubernetes.io/config.seen":"2023-03-07T18:18:28.739580253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7383 chars]
I0307 10:28:15.665528 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:15.665534 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.665540 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.665546 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.666728 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.666735 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.666743 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.666752 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.666761 7018 round_trippers.go:580] Audit-Id: 90f98c95-77ef-4f41-8b0d-68655aa67aef
I0307 10:28:15.666768 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.666773 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.666778 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.666842 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:15.667008 7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-apiserver-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.667016 7018 pod_ready.go:81] duration metric: took 2.97888ms waiting for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
E0307 10:28:15.667021 7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-apiserver-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.667025 7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:15.688093 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-260000
I0307 10:28:15.688109 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.688116 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.688121 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.689605 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.689619 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.689626 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.689631 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.689636 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.689642 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.689649 7018 round_trippers.go:580] Audit-Id: 30247593-c3f9-4f0b-8ec3-84987c2d98e7
I0307 10:28:15.689656 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.689775 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-260000","namespace":"kube-system","uid":"8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c","resourceVersion":"1031","creationTimestamp":"2023-03-07T18:18:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.mirror":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.seen":"2023-03-07T18:18:16.838236256Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7421 chars]
I0307 10:28:15.888328 7018 request.go:622] Waited for 198.258292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:15.888357 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:15.888362 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.888370 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.888378 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.890719 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:15.890732 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.890738 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.890742 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.890748 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.890753 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:16 GMT
I0307 10:28:15.890757 7018 round_trippers.go:580] Audit-Id: 2c7858e8-abf5-4b14-91d6-55537d022b63
I0307 10:28:15.890762 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.890832 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:15.891019 7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-controller-manager-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.891027 7018 pod_ready.go:81] duration metric: took 223.996649ms waiting for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
E0307 10:28:15.891033 7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-controller-manager-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.891041 7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
I0307 10:28:16.088078 7018 request.go:622] Waited for 197.006181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
I0307 10:28:16.088110 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
I0307 10:28:16.088145 7018 round_trippers.go:469] Request Headers:
I0307 10:28:16.088152 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:16.088171 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:16.090139 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:16.090148 7018 round_trippers.go:577] Response Headers:
I0307 10:28:16.090153 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:16 GMT
I0307 10:28:16.090158 7018 round_trippers.go:580] Audit-Id: 33bdce0d-afd5-41b3-be54-1778f67df277
I0307 10:28:16.090163 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:16.090168 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:16.090174 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:16.090180 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:16.090265 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8qwhq","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e455149-bbe2-4173-a413-f4962626b233","resourceVersion":"359","creationTimestamp":"2023-03-07T18:18:41Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
I0307 10:28:16.289549 7018 request.go:622] Waited for 199.030503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:16.289608 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:16.289613 7018 round_trippers.go:469] Request Headers:
I0307 10:28:16.289619 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:16.289625 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:16.291464 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:16.291474 7018 round_trippers.go:577] Response Headers:
I0307 10:28:16.291480 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:16.291486 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:16.291491 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:16.291497 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:16 GMT
I0307 10:28:16.291502 7018 round_trippers.go:580] Audit-Id: 304d1604-8237-4817-97b8-2398828df2aa
I0307 10:28:16.291512 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:16.291606 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:16.291814 7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-proxy-8qwhq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:16.291823 7018 pod_ready.go:81] duration metric: took 400.77463ms waiting for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
E0307 10:28:16.291829 7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-proxy-8qwhq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:16.291845 7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
I0307 10:28:16.488974 7018 request.go:622] Waited for 197.089772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
I0307 10:28:16.489010 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
I0307 10:28:16.489014 7018 round_trippers.go:469] Request Headers:
I0307 10:28:16.489021 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:16.489028 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:16.490668 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:16.490678 7018 round_trippers.go:577] Response Headers:
I0307 10:28:16.490684 7018 round_trippers.go:580] Audit-Id: f7cf2cf1-fe75-45fb-b387-3c47e4ca38bf
I0307 10:28:16.490689 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:16.490695 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:16.490699 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:16.490705 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:16.490710 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:16 GMT
I0307 10:28:16.490783 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pxshj","generateName":"kube-proxy-","namespace":"kube-system","uid":"3ee33e87-083d-4833-a6d4-8b459ec6ea70","resourceVersion":"469","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
I0307 10:28:16.688164 7018 request.go:622] Waited for 197.086665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:16.688201 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:16.688207 7018 round_trippers.go:469] Request Headers:
I0307 10:28:16.688216 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:16.688224 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:16.690320 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:16.690331 7018 round_trippers.go:577] Response Headers:
I0307 10:28:16.690337 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:16.690347 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:16.690354 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:16.690360 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:16 GMT
I0307 10:28:16.690365 7018 round_trippers.go:580] Audit-Id: fafa8c79-056c-4482-a7d3-9af678647000
I0307 10:28:16.690370 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:16.690435 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"75f8e0c4-47f4-43dc-ac5e-5f77d8d4ab3b","resourceVersion":"812","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4512 chars]
I0307 10:28:16.690610 7018 pod_ready.go:92] pod "kube-proxy-pxshj" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:16.690616 7018 pod_ready.go:81] duration metric: took 398.761593ms waiting for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
I0307 10:28:16.690622 7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
I0307 10:28:16.888997 7018 request.go:622] Waited for 198.34143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
I0307 10:28:16.889083 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
I0307 10:28:16.889091 7018 round_trippers.go:469] Request Headers:
I0307 10:28:16.889099 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:16.889107 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:16.890960 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:16.890976 7018 round_trippers.go:577] Response Headers:
I0307 10:28:16.890988 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:16.890997 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:16.891006 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:17 GMT
I0307 10:28:16.891013 7018 round_trippers.go:580] Audit-Id: 2a6b83fb-355a-47d1-a5fb-041011c34ce5
I0307 10:28:16.891021 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:16.891029 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:16.891126 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8cm8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9f69548-a872-4d80-aa73-ffba99b33229","resourceVersion":"1005","creationTimestamp":"2023-03-07T18:26:06Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
I0307 10:28:17.089042 7018 request.go:622] Waited for 197.667165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
I0307 10:28:17.089099 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
I0307 10:28:17.089104 7018 round_trippers.go:469] Request Headers:
I0307 10:28:17.089110 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:17.089123 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:17.092228 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:17.092240 7018 round_trippers.go:577] Response Headers:
I0307 10:28:17.092249 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:17 GMT
I0307 10:28:17.092256 7018 round_trippers.go:580] Audit-Id: 4d8ae72e-fdde-4d59-9a71-91d0c3ee68a0
I0307 10:28:17.092264 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:17.092271 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:17.092276 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:17.092282 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:17.092354 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m03","uid":"c193c270-6b50-44d5-962f-c88bf307bb54","resourceVersion":"1019","creationTimestamp":"2023-03-07T18:26:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4450 chars]
I0307 10:28:17.092536 7018 pod_ready.go:92] pod "kube-proxy-q8cm8" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:17.092542 7018 pod_ready.go:81] duration metric: took 401.914192ms waiting for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
I0307 10:28:17.092550 7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:17.289090 7018 request.go:622] Waited for 196.506508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
I0307 10:28:17.289121 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
I0307 10:28:17.289126 7018 round_trippers.go:469] Request Headers:
I0307 10:28:17.289133 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:17.289140 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:17.290898 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:17.290909 7018 round_trippers.go:577] Response Headers:
I0307 10:28:17.290915 7018 round_trippers.go:580] Audit-Id: 9fb63a2b-6315-4a56-8919-8e3ff05df64c
I0307 10:28:17.290920 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:17.290926 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:17.290932 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:17.290936 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:17.290941 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:17 GMT
I0307 10:28:17.291122 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-260000","namespace":"kube-system","uid":"0739e1eb-4026-47ee-b2fe-6a9901c77317","resourceVersion":"1035","creationTimestamp":"2023-03-07T18:18:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.mirror":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.seen":"2023-03-07T18:18:28.739583516Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5133 chars]
I0307 10:28:17.488710 7018 request.go:622] Waited for 197.357013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:17.488741 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:17.488773 7018 round_trippers.go:469] Request Headers:
I0307 10:28:17.488780 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:17.488786 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:17.492401 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:17.492411 7018 round_trippers.go:577] Response Headers:
I0307 10:28:17.492417 7018 round_trippers.go:580] Audit-Id: 8a48812e-9efb-405d-92a7-d9eab408cfe7
I0307 10:28:17.492429 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:17.492435 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:17.492439 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:17.492445 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:17.492449 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:17 GMT
I0307 10:28:17.492517 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:17.492711 7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-scheduler-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:17.492718 7018 pod_ready.go:81] duration metric: took 400.162814ms waiting for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
E0307 10:28:17.492724 7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-scheduler-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:17.492729 7018 pod_ready.go:38] duration metric: took 1.8409126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 10:28:17.492740 7018 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0307 10:28:17.500400 7018 command_runner.go:130] > -16
I0307 10:28:17.500574 7018 ops.go:34] apiserver oom_adj: -16
I0307 10:28:17.500584 7018 kubeadm.go:637] restartCluster took 20.241085671s
I0307 10:28:17.500589 7018 kubeadm.go:403] StartCluster complete in 20.26361982s
I0307 10:28:17.500600 7018 settings.go:142] acquiring lock: {Name:mk4d055ee1d778ec2752c0ce26b6fb536462adb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 10:28:17.500678 7018 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:28:17.501023 7018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15985-3430/kubeconfig: {Name:mkea569ea3041d84fd3aeaa788f308c9891aa7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 10:28:17.501262 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0307 10:28:17.501294 7018 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0307 10:28:17.546290 7018 out.go:177] * Enabled addons:
I0307 10:28:17.501457 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:28:17.501669 7018 loader.go:373] Config loaded from file: /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:28:17.583590 7018 addons.go:499] enable addons completed in 82.276784ms: enabled=[]
I0307 10:28:17.583795 7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 10:28:17.584004 7018 round_trippers.go:463] GET https://192.168.64.12:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0307 10:28:17.584011 7018 round_trippers.go:469] Request Headers:
I0307 10:28:17.584017 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:17.584022 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:17.585901 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:17.585911 7018 round_trippers.go:577] Response Headers:
I0307 10:28:17.585917 7018 round_trippers.go:580] Audit-Id: 381c106f-61b9-4164-8d45-b690984d5352
I0307 10:28:17.585927 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:17.585933 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:17.585937 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:17.585942 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:17.585947 7018 round_trippers.go:580] Content-Length: 292
I0307 10:28:17.585952 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:17 GMT
I0307 10:28:17.585965 7018 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9058bb7-5525-4245-a92a-3b0f0144c5d4","resourceVersion":"1033","creationTimestamp":"2023-03-07T18:18:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0307 10:28:17.586053 7018 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-260000" context rescaled to 1 replicas
I0307 10:28:17.586069 7018 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0307 10:28:17.598551 7018 command_runner.go:130] > apiVersion: v1
I0307 10:28:17.607409 7018 command_runner.go:130] > data:
I0307 10:28:17.607416 7018 command_runner.go:130] > Corefile: |
I0307 10:28:17.607423 7018 command_runner.go:130] > .:53 {
I0307 10:28:17.607394 7018 out.go:177] * Verifying Kubernetes components...
I0307 10:28:17.607432 7018 command_runner.go:130] > log
I0307 10:28:17.665368 7018 command_runner.go:130] > errors
I0307 10:28:17.665380 7018 command_runner.go:130] > health {
I0307 10:28:17.665387 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 10:28:17.665390 7018 command_runner.go:130] > lameduck 5s
I0307 10:28:17.665471 7018 command_runner.go:130] > }
I0307 10:28:17.665485 7018 command_runner.go:130] > ready
I0307 10:28:17.665501 7018 command_runner.go:130] > kubernetes cluster.local in-addr.arpa ip6.arpa {
I0307 10:28:17.665515 7018 command_runner.go:130] > pods insecure
I0307 10:28:17.665530 7018 command_runner.go:130] > fallthrough in-addr.arpa ip6.arpa
I0307 10:28:17.665540 7018 command_runner.go:130] > ttl 30
I0307 10:28:17.665547 7018 command_runner.go:130] > }
I0307 10:28:17.665555 7018 command_runner.go:130] > prometheus :9153
I0307 10:28:17.665561 7018 command_runner.go:130] > hosts {
I0307 10:28:17.665581 7018 command_runner.go:130] > 192.168.64.1 host.minikube.internal
I0307 10:28:17.665589 7018 command_runner.go:130] > fallthrough
I0307 10:28:17.665596 7018 command_runner.go:130] > }
I0307 10:28:17.665604 7018 command_runner.go:130] > forward . /etc/resolv.conf {
I0307 10:28:17.665613 7018 command_runner.go:130] > max_concurrent 1000
I0307 10:28:17.665622 7018 command_runner.go:130] > }
I0307 10:28:17.665633 7018 command_runner.go:130] > cache 30
I0307 10:28:17.665648 7018 command_runner.go:130] > loop
I0307 10:28:17.665659 7018 command_runner.go:130] > reload
I0307 10:28:17.665673 7018 command_runner.go:130] > loadbalance
I0307 10:28:17.665700 7018 command_runner.go:130] > }
I0307 10:28:17.665714 7018 command_runner.go:130] > kind: ConfigMap
I0307 10:28:17.665724 7018 command_runner.go:130] > metadata:
I0307 10:28:17.665738 7018 command_runner.go:130] > creationTimestamp: "2023-03-07T18:18:28Z"
I0307 10:28:17.665750 7018 command_runner.go:130] > name: coredns
I0307 10:28:17.665761 7018 command_runner.go:130] > namespace: kube-system
I0307 10:28:17.665769 7018 command_runner.go:130] > resourceVersion: "361"
I0307 10:28:17.665778 7018 command_runner.go:130] > uid: ab4f9271-2ad1-469a-9991-ac0e7cd4eee1
I0307 10:28:17.665875 7018 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0307 10:28:17.677281 7018 node_ready.go:35] waiting up to 6m0s for node "multinode-260000" to be "Ready" ...
I0307 10:28:17.688141 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:17.688153 7018 round_trippers.go:469] Request Headers:
I0307 10:28:17.688160 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:17.688165 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:17.699560 7018 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0307 10:28:17.699573 7018 round_trippers.go:577] Response Headers:
I0307 10:28:17.699579 7018 round_trippers.go:580] Audit-Id: b0a8d418-5306-402d-aafe-b01480d098d1
I0307 10:28:17.699584 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:17.699588 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:17.699594 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:17.699602 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:17.699607 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:17 GMT
I0307 10:28:17.699666 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:18.201280 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:18.201301 7018 round_trippers.go:469] Request Headers:
I0307 10:28:18.201313 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:18.201324 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:18.205520 7018 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0307 10:28:18.205536 7018 round_trippers.go:577] Response Headers:
I0307 10:28:18.205545 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:18 GMT
I0307 10:28:18.205551 7018 round_trippers.go:580] Audit-Id: 93568139-27e9-412b-aabc-a063cf381701
I0307 10:28:18.205556 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:18.205560 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:18.205566 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:18.205571 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:18.205679 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:18.700510 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:18.700532 7018 round_trippers.go:469] Request Headers:
I0307 10:28:18.700545 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:18.700556 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:18.703654 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:18.703670 7018 round_trippers.go:577] Response Headers:
I0307 10:28:18.703678 7018 round_trippers.go:580] Audit-Id: fe05d8ff-851d-43ec-87d1-ea8137b7dbe8
I0307 10:28:18.703684 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:18.703691 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:18.703714 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:18.703725 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:18.703732 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:18 GMT
I0307 10:28:18.703813 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:19.202177 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:19.202200 7018 round_trippers.go:469] Request Headers:
I0307 10:28:19.202214 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:19.202227 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:19.205274 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:19.205290 7018 round_trippers.go:577] Response Headers:
I0307 10:28:19.205298 7018 round_trippers.go:580] Audit-Id: 01e6aee3-dfa5-4ab3-b092-2707828ba795
I0307 10:28:19.205331 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:19.205342 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:19.205349 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:19.205357 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:19.205364 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:19 GMT
I0307 10:28:19.205470 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:19.700708 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:19.700729 7018 round_trippers.go:469] Request Headers:
I0307 10:28:19.700741 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:19.700751 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:19.703406 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:19.703422 7018 round_trippers.go:577] Response Headers:
I0307 10:28:19.703431 7018 round_trippers.go:580] Audit-Id: 3a975007-4ad9-4952-af4f-5375799e6a1a
I0307 10:28:19.703439 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:19.703445 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:19.703452 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:19.703458 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:19.703466 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:19 GMT
I0307 10:28:19.703543 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:19.703788 7018 node_ready.go:58] node "multinode-260000" has status "Ready":"False"
I0307 10:28:20.200489 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:20.200509 7018 round_trippers.go:469] Request Headers:
I0307 10:28:20.200521 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:20.200531 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:20.203162 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:20.203178 7018 round_trippers.go:577] Response Headers:
I0307 10:28:20.203186 7018 round_trippers.go:580] Audit-Id: a8a0b987-0c00-4eb2-84cc-bb8ba63cb67a
I0307 10:28:20.203193 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:20.203202 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:20.203212 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:20.203220 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:20.203228 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:20 GMT
I0307 10:28:20.203489 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:20.700672 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:20.700696 7018 round_trippers.go:469] Request Headers:
I0307 10:28:20.700709 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:20.700725 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:20.703549 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:20.703565 7018 round_trippers.go:577] Response Headers:
I0307 10:28:20.703573 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:20.703580 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:20.703586 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:20.703593 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:20 GMT
I0307 10:28:20.703599 7018 round_trippers.go:580] Audit-Id: efe8aac9-6cb0-4496-83f5-15dd81197a83
I0307 10:28:20.703607 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:20.703677 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:21.201352 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:21.201373 7018 round_trippers.go:469] Request Headers:
I0307 10:28:21.201385 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:21.201395 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:21.204173 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:21.204190 7018 round_trippers.go:577] Response Headers:
I0307 10:28:21.204197 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:21 GMT
I0307 10:28:21.204205 7018 round_trippers.go:580] Audit-Id: be92e2ce-4712-4f1e-861a-703e11d6cba4
I0307 10:28:21.204220 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:21.204229 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:21.204235 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:21.204243 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:21.204341 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:21.700804 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:21.700827 7018 round_trippers.go:469] Request Headers:
I0307 10:28:21.700840 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:21.700851 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:21.703563 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:21.703580 7018 round_trippers.go:577] Response Headers:
I0307 10:28:21.703588 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:21.703595 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:21.703602 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:21 GMT
I0307 10:28:21.703609 7018 round_trippers.go:580] Audit-Id: d76a302b-b114-4fb6-a945-db5c79d73c04
I0307 10:28:21.703616 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:21.703622 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:21.703693 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:21.703979 7018 node_ready.go:58] node "multinode-260000" has status "Ready":"False"
I0307 10:28:22.200196 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:22.200216 7018 round_trippers.go:469] Request Headers:
I0307 10:28:22.200229 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:22.200239 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:22.202586 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:22.202599 7018 round_trippers.go:577] Response Headers:
I0307 10:28:22.202606 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:22.202614 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:22 GMT
I0307 10:28:22.202622 7018 round_trippers.go:580] Audit-Id: 4ff0cc55-c046-416f-9185-daae0bebce4a
I0307 10:28:22.202632 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:22.202639 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:22.202696 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:22.202811 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:22.700709 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:22.700730 7018 round_trippers.go:469] Request Headers:
I0307 10:28:22.700742 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:22.700752 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:22.702936 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:22.723882 7018 round_trippers.go:577] Response Headers:
I0307 10:28:22.723896 7018 round_trippers.go:580] Audit-Id: 29769d58-0043-4d39-82f0-cccd4df4015a
I0307 10:28:22.723957 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:22.723969 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:22.723978 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:22.723988 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:22.723998 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:22 GMT
I0307 10:28:22.724094 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:23.200620 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:23.200644 7018 round_trippers.go:469] Request Headers:
I0307 10:28:23.200657 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:23.200667 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:23.203465 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:23.203481 7018 round_trippers.go:577] Response Headers:
I0307 10:28:23.203489 7018 round_trippers.go:580] Audit-Id: 9e76918b-04a7-460f-b7a3-1bb26e8c0971
I0307 10:28:23.203496 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:23.203502 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:23.203510 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:23.203517 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:23.203523 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:23 GMT
I0307 10:28:23.203617 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:23.700169 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:23.700191 7018 round_trippers.go:469] Request Headers:
I0307 10:28:23.700203 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:23.700213 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:23.703029 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:23.703045 7018 round_trippers.go:577] Response Headers:
I0307 10:28:23.703053 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:23.703059 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:23.703067 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:23.703076 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:23.703088 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:23 GMT
I0307 10:28:23.703098 7018 round_trippers.go:580] Audit-Id: ef8f12d5-7107-46fa-a902-ce29a6cd21c5
I0307 10:28:23.703227 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:23.703480 7018 node_ready.go:49] node "multinode-260000" has status "Ready":"True"
I0307 10:28:23.703494 7018 node_ready.go:38] duration metric: took 6.026171359s waiting for node "multinode-260000" to be "Ready" ...
I0307 10:28:23.703502 7018 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 10:28:23.703549 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:23.703555 7018 round_trippers.go:469] Request Headers:
I0307 10:28:23.703563 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:23.703572 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:23.705759 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:23.705769 7018 round_trippers.go:577] Response Headers:
I0307 10:28:23.705780 7018 round_trippers.go:580] Audit-Id: 67287338-b563-4ece-963d-6a23473c12f5
I0307 10:28:23.705788 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:23.705795 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:23.705804 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:23.705811 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:23.705818 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:23 GMT
I0307 10:28:23.706556 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1094"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83638 chars]
I0307 10:28:23.708320 7018 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
I0307 10:28:23.708353 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:23.708358 7018 round_trippers.go:469] Request Headers:
I0307 10:28:23.708374 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:23.708381 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:23.709654 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:23.709668 7018 round_trippers.go:577] Response Headers:
I0307 10:28:23.709674 7018 round_trippers.go:580] Audit-Id: 31e97546-40fd-4948-9b6f-419bdad39a05
I0307 10:28:23.709680 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:23.709685 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:23.709690 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:23.709696 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:23.709701 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:23 GMT
I0307 10:28:23.709974 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:23.710200 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:23.710205 7018 round_trippers.go:469] Request Headers:
I0307 10:28:23.710212 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:23.710218 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:23.711266 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:23.711276 7018 round_trippers.go:577] Response Headers:
I0307 10:28:23.711284 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:23.711291 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:23.711299 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:23.711307 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:23.711316 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:23 GMT
I0307 10:28:23.711324 7018 round_trippers.go:580] Audit-Id: ef253b5e-8ae9-4c22-97b4-635ece1c07f1
I0307 10:28:23.711443 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:24.211832 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:24.211854 7018 round_trippers.go:469] Request Headers:
I0307 10:28:24.211868 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:24.211879 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:24.214134 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:24.214147 7018 round_trippers.go:577] Response Headers:
I0307 10:28:24.214155 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:24.214161 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:24.214169 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:24 GMT
I0307 10:28:24.214176 7018 round_trippers.go:580] Audit-Id: 7cceac8c-72f2-43b3-a70c-da8298a351ea
I0307 10:28:24.214183 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:24.214189 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:24.214267 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:24.214622 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:24.214631 7018 round_trippers.go:469] Request Headers:
I0307 10:28:24.214639 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:24.214647 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:24.216139 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:24.216148 7018 round_trippers.go:577] Response Headers:
I0307 10:28:24.216154 7018 round_trippers.go:580] Audit-Id: 651af490-ed9e-4eba-a495-32b2210d00c4
I0307 10:28:24.216159 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:24.216167 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:24.216176 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:24.216187 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:24.216193 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:24 GMT
I0307 10:28:24.216294 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:24.712583 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:24.712604 7018 round_trippers.go:469] Request Headers:
I0307 10:28:24.712617 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:24.712627 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:24.715128 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:24.715141 7018 round_trippers.go:577] Response Headers:
I0307 10:28:24.715151 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:24.715174 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:24.715202 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:24.715215 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:24 GMT
I0307 10:28:24.715229 7018 round_trippers.go:580] Audit-Id: 64f8c7b5-e206-4888-b04e-57f95c098459
I0307 10:28:24.715263 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:24.715362 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:24.715724 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:24.715733 7018 round_trippers.go:469] Request Headers:
I0307 10:28:24.715741 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:24.715748 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:24.717117 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:24.717131 7018 round_trippers.go:577] Response Headers:
I0307 10:28:24.717139 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:24.717149 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:24.717158 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:24 GMT
I0307 10:28:24.717165 7018 round_trippers.go:580] Audit-Id: 39facfb8-6882-4093-a54a-be9e41cdcd8a
I0307 10:28:24.717189 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:24.717203 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:24.717297 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:25.211941 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:25.211961 7018 round_trippers.go:469] Request Headers:
I0307 10:28:25.211973 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:25.211984 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:25.214996 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:25.215012 7018 round_trippers.go:577] Response Headers:
I0307 10:28:25.215056 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:25.215076 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:25.215089 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:25.215121 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:25 GMT
I0307 10:28:25.215133 7018 round_trippers.go:580] Audit-Id: eab464a3-fd8c-4abd-92da-a9e3fab09b87
I0307 10:28:25.215153 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:25.215232 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:25.215588 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:25.215596 7018 round_trippers.go:469] Request Headers:
I0307 10:28:25.215604 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:25.215611 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:25.216989 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:25.217000 7018 round_trippers.go:577] Response Headers:
I0307 10:28:25.217005 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:25.217010 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:25 GMT
I0307 10:28:25.217021 7018 round_trippers.go:580] Audit-Id: 1b48fc62-d0ae-42f1-a567-d263b0778b46
I0307 10:28:25.217026 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:25.217031 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:25.217038 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:25.217228 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:25.713156 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:25.713175 7018 round_trippers.go:469] Request Headers:
I0307 10:28:25.713187 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:25.713197 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:25.715881 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:25.715901 7018 round_trippers.go:577] Response Headers:
I0307 10:28:25.715913 7018 round_trippers.go:580] Audit-Id: b458a53f-cebf-4dba-b1b0-795a83b24bef
I0307 10:28:25.715924 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:25.715933 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:25.715939 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:25.715946 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:25.715956 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:25 GMT
I0307 10:28:25.716134 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:25.716499 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:25.716508 7018 round_trippers.go:469] Request Headers:
I0307 10:28:25.716516 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:25.716523 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:25.717669 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:25.717677 7018 round_trippers.go:577] Response Headers:
I0307 10:28:25.717683 7018 round_trippers.go:580] Audit-Id: 1eb8ab80-758c-4e81-8dcb-159f98be89b6
I0307 10:28:25.717691 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:25.717698 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:25.717705 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:25.717711 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:25.717717 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:25 GMT
I0307 10:28:25.717847 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:25.718043 7018 pod_ready.go:102] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"False"
I0307 10:28:26.211810 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:26.211826 7018 round_trippers.go:469] Request Headers:
I0307 10:28:26.211833 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:26.211854 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:26.217580 7018 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0307 10:28:26.217593 7018 round_trippers.go:577] Response Headers:
I0307 10:28:26.217599 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:26.217624 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:26 GMT
I0307 10:28:26.217634 7018 round_trippers.go:580] Audit-Id: 25844fb6-cd84-4dd3-af18-9f89ee6d5a04
I0307 10:28:26.217641 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:26.217646 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:26.217651 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:26.218222 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:26.218502 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:26.218509 7018 round_trippers.go:469] Request Headers:
I0307 10:28:26.218515 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:26.218520 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:26.223546 7018 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0307 10:28:26.223558 7018 round_trippers.go:577] Response Headers:
I0307 10:28:26.223563 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:26 GMT
I0307 10:28:26.223568 7018 round_trippers.go:580] Audit-Id: bf250b8a-6074-45b3-9f33-45ad42a6a343
I0307 10:28:26.223573 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:26.223578 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:26.223582 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:26.223587 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:26.224042 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:26.713218 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:26.713243 7018 round_trippers.go:469] Request Headers:
I0307 10:28:26.713255 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:26.713265 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:26.716102 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:26.716121 7018 round_trippers.go:577] Response Headers:
I0307 10:28:26.716129 7018 round_trippers.go:580] Audit-Id: 219d5f63-3a7c-44c7-8b51-2921f95c2710
I0307 10:28:26.716136 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:26.716144 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:26.716151 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:26.716157 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:26.716165 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:26 GMT
I0307 10:28:26.716247 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:26.716596 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:26.716604 7018 round_trippers.go:469] Request Headers:
I0307 10:28:26.716612 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:26.716619 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:26.718244 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:26.718252 7018 round_trippers.go:577] Response Headers:
I0307 10:28:26.718258 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:26.718264 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:26.718274 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:26.718280 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:26.718288 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:26 GMT
I0307 10:28:26.718293 7018 round_trippers.go:580] Audit-Id: ad769d45-1dbe-4f0f-bad4-953da8623939
I0307 10:28:26.718441 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:27.212704 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:27.212727 7018 round_trippers.go:469] Request Headers:
I0307 10:28:27.212739 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:27.212749 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:27.215311 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:27.215337 7018 round_trippers.go:577] Response Headers:
I0307 10:28:27.215345 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:27.215353 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:27.215361 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:27 GMT
I0307 10:28:27.215367 7018 round_trippers.go:580] Audit-Id: 36856e4f-a7e1-45d6-97ce-8f885ac8c841
I0307 10:28:27.215374 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:27.215381 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:27.215565 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:27.215939 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:27.215948 7018 round_trippers.go:469] Request Headers:
I0307 10:28:27.215956 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:27.215964 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:27.217347 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:27.217354 7018 round_trippers.go:577] Response Headers:
I0307 10:28:27.217362 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:27.217368 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:27 GMT
I0307 10:28:27.217374 7018 round_trippers.go:580] Audit-Id: d6676113-bd9a-4eaf-ba1b-019818744e42
I0307 10:28:27.217381 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:27.217389 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:27.217404 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:27.217556 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:27.711824 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:27.724865 7018 round_trippers.go:469] Request Headers:
I0307 10:28:27.724880 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:27.724887 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:27.726579 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:27.726589 7018 round_trippers.go:577] Response Headers:
I0307 10:28:27.726594 7018 round_trippers.go:580] Audit-Id: 0d01fa41-8246-4722-9399-93a5592f6b29
I0307 10:28:27.726599 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:27.726606 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:27.726613 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:27.726619 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:27.726624 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:27 GMT
I0307 10:28:27.726876 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:27.727175 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:27.727181 7018 round_trippers.go:469] Request Headers:
I0307 10:28:27.727187 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:27.727192 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:27.728314 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:27.728322 7018 round_trippers.go:577] Response Headers:
I0307 10:28:27.728334 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:27.728347 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:27.728353 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:27.728370 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:27 GMT
I0307 10:28:27.728379 7018 round_trippers.go:580] Audit-Id: 0e3e9ef9-ecac-45df-aee2-aff56bc03a97
I0307 10:28:27.728391 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:27.728478 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:27.728664 7018 pod_ready.go:102] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"False"
I0307 10:28:28.212950 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:28.212969 7018 round_trippers.go:469] Request Headers:
I0307 10:28:28.212982 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:28.212992 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:28.216019 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:28.216035 7018 round_trippers.go:577] Response Headers:
I0307 10:28:28.216043 7018 round_trippers.go:580] Audit-Id: 24e3382f-877e-4bd3-9d01-53648e905133
I0307 10:28:28.216051 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:28.216057 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:28.216064 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:28.216072 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:28.216078 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:28 GMT
I0307 10:28:28.216218 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:28.216592 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:28.216601 7018 round_trippers.go:469] Request Headers:
I0307 10:28:28.216610 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:28.216617 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:28.218098 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:28.218109 7018 round_trippers.go:577] Response Headers:
I0307 10:28:28.218116 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:28 GMT
I0307 10:28:28.218121 7018 round_trippers.go:580] Audit-Id: ba13bf42-a23e-4b8b-b82d-f134c64fb02d
I0307 10:28:28.218133 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:28.218139 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:28.218144 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:28.218149 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:28.218380 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:28.713844 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:28.713872 7018 round_trippers.go:469] Request Headers:
I0307 10:28:28.713886 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:28.713897 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:28.717059 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:28.717075 7018 round_trippers.go:577] Response Headers:
I0307 10:28:28.717082 7018 round_trippers.go:580] Audit-Id: 2d17ebc7-34f0-4220-a01c-eba9dc18629b
I0307 10:28:28.717089 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:28.717096 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:28.717102 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:28.717109 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:28.717115 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:28 GMT
I0307 10:28:28.717206 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:28.717584 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:28.717593 7018 round_trippers.go:469] Request Headers:
I0307 10:28:28.717601 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:28.717609 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:28.718961 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:28.718971 7018 round_trippers.go:577] Response Headers:
I0307 10:28:28.718978 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:28.718982 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:28.718987 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:28.718992 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:28 GMT
I0307 10:28:28.718997 7018 round_trippers.go:580] Audit-Id: 1a95c19b-155c-4919-8f52-e4a21e53e43d
I0307 10:28:28.719002 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:28.719162 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:29.212285 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:29.212298 7018 round_trippers.go:469] Request Headers:
I0307 10:28:29.212305 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:29.212310 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:29.214049 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:29.214059 7018 round_trippers.go:577] Response Headers:
I0307 10:28:29.214065 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:29.214070 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:29.214075 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:29.214080 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:29 GMT
I0307 10:28:29.214087 7018 round_trippers.go:580] Audit-Id: 5902e368-f17f-4c82-9c7c-675d086888dd
I0307 10:28:29.214092 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:29.214228 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:29.214511 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:29.214517 7018 round_trippers.go:469] Request Headers:
I0307 10:28:29.214523 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:29.214529 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:29.215699 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:29.215709 7018 round_trippers.go:577] Response Headers:
I0307 10:28:29.215716 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:29.215723 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:29 GMT
I0307 10:28:29.215729 7018 round_trippers.go:580] Audit-Id: b6d6f5f7-09c3-4195-a4c1-845aef7ffc32
I0307 10:28:29.215734 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:29.215740 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:29.215747 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:29.215925 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:29.713052 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:29.713064 7018 round_trippers.go:469] Request Headers:
I0307 10:28:29.713070 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:29.713076 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:29.714443 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:29.714452 7018 round_trippers.go:577] Response Headers:
I0307 10:28:29.714457 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:29.714463 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:29.714468 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:29.714479 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:29 GMT
I0307 10:28:29.714484 7018 round_trippers.go:580] Audit-Id: 9c79de10-38b6-4cc5-8a5c-f518875339a0
I0307 10:28:29.714489 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:29.714549 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:29.714827 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:29.714833 7018 round_trippers.go:469] Request Headers:
I0307 10:28:29.714839 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:29.714844 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:29.723979 7018 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0307 10:28:29.723993 7018 round_trippers.go:577] Response Headers:
I0307 10:28:29.724011 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:29.724019 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:29.724028 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:29.724034 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:29 GMT
I0307 10:28:29.724040 7018 round_trippers.go:580] Audit-Id: 23a3f013-edd3-4bde-b9dc-3cdee57361b7
I0307 10:28:29.724046 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:29.724143 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.211801 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:30.211812 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.211819 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.211824 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.213958 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:30.213967 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.213972 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.213979 7018 round_trippers.go:580] Audit-Id: e3914bca-23b4-48cb-b3f3-c3e31ebe9b8e
I0307 10:28:30.213984 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.213989 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.213994 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.213999 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.219685 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:30.219986 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:30.219995 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.220004 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.220012 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.221717 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:30.221732 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.221741 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.221756 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.221762 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.221769 7018 round_trippers.go:580] Audit-Id: f3b83e3d-bec0-444f-bd00-ec3be70f6d10
I0307 10:28:30.221777 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.221783 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.221864 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.222060 7018 pod_ready.go:102] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"False"
I0307 10:28:30.712597 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:30.712622 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.712717 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.712731 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.716221 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:30.716239 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.716247 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.716256 7018 round_trippers.go:580] Audit-Id: c7b16bdb-1c9a-42a3-b989-2ef728451887
I0307 10:28:30.716263 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.716270 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.716278 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.716284 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.716375 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6489 chars]
I0307 10:28:30.716777 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:30.716785 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.716793 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.716801 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.718436 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:30.718450 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.718457 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.718466 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.718473 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.718480 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.718485 7018 round_trippers.go:580] Audit-Id: 405256c2-a3b7-4450-9419-3e5f6172aabd
I0307 10:28:30.718491 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.718618 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.718803 7018 pod_ready.go:92] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:30.718812 7018 pod_ready.go:81] duration metric: took 7.010451765s waiting for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.718825 7018 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.718853 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-260000
I0307 10:28:30.719043 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.719125 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.719139 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.721072 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:30.721084 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.721090 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.721095 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.721100 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.721105 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.721110 7018 round_trippers.go:580] Audit-Id: ea8580ee-1e6e-4f3b-8474-356c1d7d09d5
I0307 10:28:30.721114 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.721227 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"1080","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6056 chars]
I0307 10:28:30.721443 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:30.721450 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.721456 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.721461 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.722677 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:30.722687 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.722699 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.722710 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.722719 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.722725 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.722731 7018 round_trippers.go:580] Audit-Id: 9a6b5445-3298-4c53-9f39-0cfd9f3d0951
I0307 10:28:30.722738 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.722826 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.723009 7018 pod_ready.go:92] pod "etcd-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:30.723015 7018 pod_ready.go:81] duration metric: took 4.185851ms waiting for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.723025 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.723049 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-260000
I0307 10:28:30.723053 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.723059 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.723068 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.725808 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:30.725819 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.725824 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.725830 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.725835 7018 round_trippers.go:580] Audit-Id: 27751b68-dbeb-4139-b048-aa37ba96ce0d
I0307 10:28:30.725840 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.725844 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.725850 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.725930 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-260000","namespace":"kube-system","uid":"64ba25bc-eee2-433a-b0ef-a13769f04555","resourceVersion":"1143","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"76402f877907c95a3936143f580968be","kubernetes.io/config.mirror":"76402f877907c95a3936143f580968be","kubernetes.io/config.seen":"2023-03-07T18:18:28.739580253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7591 chars]
I0307 10:28:30.726162 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:30.726168 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.726173 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.726179 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.727114 7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0307 10:28:30.727123 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.727129 7018 round_trippers.go:580] Audit-Id: 09ac9355-1c65-4420-8f52-155883618aa6
I0307 10:28:30.727134 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.727140 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.727145 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.727150 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.727155 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.727288 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.727470 7018 pod_ready.go:92] pod "kube-apiserver-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:30.727476 7018 pod_ready.go:81] duration metric: took 4.446202ms waiting for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.727481 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.727505 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-260000
I0307 10:28:30.727510 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.727516 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.727522 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.728648 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:30.728659 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.728665 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.728670 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.728674 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.728679 7018 round_trippers.go:580] Audit-Id: 559a8b88-70d9-4098-a5fd-ce69e6fc06be
I0307 10:28:30.728684 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.728688 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.728916 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-260000","namespace":"kube-system","uid":"8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c","resourceVersion":"1131","creationTimestamp":"2023-03-07T18:18:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.mirror":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.seen":"2023-03-07T18:18:16.838236256Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7159 chars]
I0307 10:28:30.729139 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:30.729145 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.729151 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.729157 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.730563 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:30.730570 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.730575 7018 round_trippers.go:580] Audit-Id: 8efa58ee-7b42-4ba5-a878-ad10e7d3e33b
I0307 10:28:30.730579 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.730584 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.730588 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.730593 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.730599 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.730701 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.730866 7018 pod_ready.go:92] pod "kube-controller-manager-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:30.730872 7018 pod_ready.go:81] duration metric: took 3.385852ms waiting for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.730877 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.730902 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
I0307 10:28:30.730906 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.730912 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.730918 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.731885 7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0307 10:28:30.731894 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.731900 7018 round_trippers.go:580] Audit-Id: ffc44502-d870-437e-9544-bf450ca2b814
I0307 10:28:30.731906 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.731914 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.731920 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.731925 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.731930 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.732036 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8qwhq","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e455149-bbe2-4173-a413-f4962626b233","resourceVersion":"1061","creationTimestamp":"2023-03-07T18:18:41Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
I0307 10:28:30.732243 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:30.732248 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.732255 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.732260 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.733218 7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0307 10:28:30.733226 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.733232 7018 round_trippers.go:580] Audit-Id: 3937160f-ce1c-4927-8fe0-6e7893d1567c
I0307 10:28:30.733237 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.733244 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.733248 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.733253 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.733258 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.733356 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.733519 7018 pod_ready.go:92] pod "kube-proxy-8qwhq" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:30.733525 7018 pod_ready.go:81] duration metric: took 2.642988ms waiting for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.733531 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.912636 7018 request.go:622] Waited for 179.066998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
I0307 10:28:30.912685 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
I0307 10:28:30.912694 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.912778 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.912791 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.915495 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:30.915507 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.915515 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.915522 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.915530 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.915536 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.915544 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:31 GMT
I0307 10:28:30.915550 7018 round_trippers.go:580] Audit-Id: 3ae79f8d-1535-4d8e-a180-5f18227960da
I0307 10:28:30.915655 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pxshj","generateName":"kube-proxy-","namespace":"kube-system","uid":"3ee33e87-083d-4833-a6d4-8b459ec6ea70","resourceVersion":"469","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
I0307 10:28:31.114599 7018 request.go:622] Waited for 198.634122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:31.114628 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:31.114633 7018 round_trippers.go:469] Request Headers:
I0307 10:28:31.114642 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:31.114649 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:31.116473 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:31.116483 7018 round_trippers.go:577] Response Headers:
I0307 10:28:31.116488 7018 round_trippers.go:580] Audit-Id: e955a99c-57ac-4ae0-a513-9afa809a5caf
I0307 10:28:31.116493 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:31.116498 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:31.116503 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:31.116509 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:31.116513 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:31 GMT
I0307 10:28:31.116688 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"75f8e0c4-47f4-43dc-ac5e-5f77d8d4ab3b","resourceVersion":"812","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4512 chars]
I0307 10:28:31.116864 7018 pod_ready.go:92] pod "kube-proxy-pxshj" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:31.116870 7018 pod_ready.go:81] duration metric: took 383.333062ms waiting for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
I0307 10:28:31.116876 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
I0307 10:28:31.314683 7018 request.go:622] Waited for 197.728848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
I0307 10:28:31.314736 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
I0307 10:28:31.314770 7018 round_trippers.go:469] Request Headers:
I0307 10:28:31.314788 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:31.314803 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:31.317976 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:31.317992 7018 round_trippers.go:577] Response Headers:
I0307 10:28:31.318000 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:31 GMT
I0307 10:28:31.318029 7018 round_trippers.go:580] Audit-Id: a357c92b-2320-4582-b9e7-f62d05a9d4e3
I0307 10:28:31.318042 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:31.318051 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:31.318057 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:31.318064 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:31.318199 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8cm8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9f69548-a872-4d80-aa73-ffba99b33229","resourceVersion":"1005","creationTimestamp":"2023-03-07T18:26:06Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
I0307 10:28:31.514054 7018 request.go:622] Waited for 195.505176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
I0307 10:28:31.514146 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
I0307 10:28:31.514242 7018 round_trippers.go:469] Request Headers:
I0307 10:28:31.514254 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:31.514267 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:31.517133 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:31.517148 7018 round_trippers.go:577] Response Headers:
I0307 10:28:31.517156 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:31.517163 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:31.517171 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:31.517178 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:31.517184 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:31 GMT
I0307 10:28:31.517191 7018 round_trippers.go:580] Audit-Id: 532579cf-d5cc-41c0-b38e-54a2f800d22f
I0307 10:28:31.517302 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m03","uid":"c193c270-6b50-44d5-962f-c88bf307bb54","resourceVersion":"1109","creationTimestamp":"2023-03-07T18:26:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4330 chars]
I0307 10:28:31.517527 7018 pod_ready.go:92] pod "kube-proxy-q8cm8" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:31.517534 7018 pod_ready.go:81] duration metric: took 400.651378ms waiting for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
I0307 10:28:31.517542 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:31.713858 7018 request.go:622] Waited for 196.240525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
I0307 10:28:31.713912 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
I0307 10:28:31.713952 7018 round_trippers.go:469] Request Headers:
I0307 10:28:31.713969 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:31.713983 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:31.716855 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:31.716871 7018 round_trippers.go:577] Response Headers:
I0307 10:28:31.716879 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:31.716894 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:31 GMT
I0307 10:28:31.716902 7018 round_trippers.go:580] Audit-Id: 291b5d9b-3357-4be3-9d0c-89832cae8ad3
I0307 10:28:31.716910 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:31.716917 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:31.716924 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:31.717008 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-260000","namespace":"kube-system","uid":"0739e1eb-4026-47ee-b2fe-6a9901c77317","resourceVersion":"1139","creationTimestamp":"2023-03-07T18:18:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.mirror":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.seen":"2023-03-07T18:18:28.739583516Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4889 chars]
I0307 10:28:31.912715 7018 request.go:622] Waited for 195.420936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:31.912766 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:31.912775 7018 round_trippers.go:469] Request Headers:
I0307 10:28:31.912789 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:31.912852 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:31.915496 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:31.915515 7018 round_trippers.go:577] Response Headers:
I0307 10:28:31.915523 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:32 GMT
I0307 10:28:31.915532 7018 round_trippers.go:580] Audit-Id: ab49a22e-b0ca-4460-8af6-f31980cc83e0
I0307 10:28:31.915539 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:31.915547 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:31.915558 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:31.915565 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:31.915671 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:31.915930 7018 pod_ready.go:92] pod "kube-scheduler-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:31.915938 7018 pod_ready.go:81] duration metric: took 398.388063ms waiting for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:31.915946 7018 pod_ready.go:38] duration metric: took 8.212399171s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 10:28:31.915959 7018 api_server.go:51] waiting for apiserver process to appear ...
I0307 10:28:31.916021 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 10:28:31.926000 7018 command_runner.go:130] > 1604
I0307 10:28:31.926101 7018 api_server.go:71] duration metric: took 14.339953362s to wait for apiserver process to appear ...
I0307 10:28:31.926109 7018 api_server.go:87] waiting for apiserver healthz status ...
I0307 10:28:31.926115 7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
I0307 10:28:31.929766 7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 200:
ok
I0307 10:28:31.929791 7018 round_trippers.go:463] GET https://192.168.64.12:8443/version
I0307 10:28:31.929796 7018 round_trippers.go:469] Request Headers:
I0307 10:28:31.929803 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:31.929809 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:31.930265 7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0307 10:28:31.930272 7018 round_trippers.go:577] Response Headers:
I0307 10:28:31.930277 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:31.930283 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:31.930291 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:31.930297 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:31.930302 7018 round_trippers.go:580] Content-Length: 263
I0307 10:28:31.930307 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:32 GMT
I0307 10:28:31.930313 7018 round_trippers.go:580] Audit-Id: 416b7f0f-553f-48b8-8633-6be8897b3ddf
I0307 10:28:31.930330 7018 request.go:1171] Response Body: {
"major": "1",
"minor": "26",
"gitVersion": "v1.26.2",
"gitCommit": "fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b",
"gitTreeState": "clean",
"buildDate": "2023-02-22T13:32:22Z",
"goVersion": "go1.19.6",
"compiler": "gc",
"platform": "linux/amd64"
}
I0307 10:28:31.930354 7018 api_server.go:140] control plane version: v1.26.2
I0307 10:28:31.930360 7018 api_server.go:130] duration metric: took 4.24718ms to wait for apiserver health ...
I0307 10:28:31.930364 7018 system_pods.go:43] waiting for kube-system pods to appear ...
I0307 10:28:32.112716 7018 request.go:622] Waited for 182.311615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:32.112771 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:32.112780 7018 round_trippers.go:469] Request Headers:
I0307 10:28:32.112834 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:32.112848 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:32.116811 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:32.116841 7018 round_trippers.go:577] Response Headers:
I0307 10:28:32.116877 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:32.116904 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:32 GMT
I0307 10:28:32.116916 7018 round_trippers.go:580] Audit-Id: c5d1857d-a22f-42d9-aec9-08ad8e7331bd
I0307 10:28:32.116950 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:32.116966 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:32.116973 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:32.118187 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82836 chars]
I0307 10:28:32.119945 7018 system_pods.go:59] 12 kube-system pods found
I0307 10:28:32.119954 7018 system_pods.go:61] "coredns-787d4945fb-x8m8v" [c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6] Running
I0307 10:28:32.119958 7018 system_pods.go:61] "etcd-multinode-260000" [aa53b0f1-968e-450d-90b2-ad26a79cea99] Running
I0307 10:28:32.119963 7018 system_pods.go:61] "kindnet-gfgwn" [64dc8044-f77e-41b4-bb19-1a254bf29e05] Running
I0307 10:28:32.119967 7018 system_pods.go:61] "kindnet-j5gj9" [f17b9702-c5c0-4b31-a136-e0370bc62d79] Running
I0307 10:28:32.119970 7018 system_pods.go:61] "kindnet-z6kqp" [4884d21b-1be9-4b53-8f70-dd4fe0efa264] Running
I0307 10:28:32.119975 7018 system_pods.go:61] "kube-apiserver-multinode-260000" [64ba25bc-eee2-433a-b0ef-a13769f04555] Running
I0307 10:28:32.119993 7018 system_pods.go:61] "kube-controller-manager-multinode-260000" [8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c] Running
I0307 10:28:32.120000 7018 system_pods.go:61] "kube-proxy-8qwhq" [3e455149-bbe2-4173-a413-f4962626b233] Running
I0307 10:28:32.120004 7018 system_pods.go:61] "kube-proxy-pxshj" [3ee33e87-083d-4833-a6d4-8b459ec6ea70] Running
I0307 10:28:32.120008 7018 system_pods.go:61] "kube-proxy-q8cm8" [b9f69548-a872-4d80-aa73-ffba99b33229] Running
I0307 10:28:32.120011 7018 system_pods.go:61] "kube-scheduler-multinode-260000" [0739e1eb-4026-47ee-b2fe-6a9901c77317] Running
I0307 10:28:32.120016 7018 system_pods.go:61] "storage-provisioner" [0b88c317-8e90-4927-b4f8-cae5597b5dc8] Running
I0307 10:28:32.120019 7018 system_pods.go:74] duration metric: took 189.651129ms to wait for pod list to return data ...
I0307 10:28:32.120025 7018 default_sa.go:34] waiting for default service account to be created ...
I0307 10:28:32.313205 7018 request.go:622] Waited for 193.131438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/default/serviceaccounts
I0307 10:28:32.313251 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/default/serviceaccounts
I0307 10:28:32.313259 7018 round_trippers.go:469] Request Headers:
I0307 10:28:32.313271 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:32.313281 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:32.315756 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:32.315778 7018 round_trippers.go:577] Response Headers:
I0307 10:28:32.315809 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:32.315822 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:32.315830 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:32.315837 7018 round_trippers.go:580] Content-Length: 262
I0307 10:28:32.315843 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:32 GMT
I0307 10:28:32.315850 7018 round_trippers.go:580] Audit-Id: ac7a8c42-5ffa-402f-970f-d1d5a6d3058d
I0307 10:28:32.315857 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:32.315874 7018 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6e32b5cd-63bd-46a7-9ed5-ea842da6729c","resourceVersion":"325","creationTimestamp":"2023-03-07T18:18:42Z"}}]}
I0307 10:28:32.316001 7018 default_sa.go:45] found service account: "default"
I0307 10:28:32.316010 7018 default_sa.go:55] duration metric: took 195.9795ms for default service account to be created ...
I0307 10:28:32.316018 7018 system_pods.go:116] waiting for k8s-apps to be running ...
I0307 10:28:32.513632 7018 request.go:622] Waited for 197.482521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:32.513683 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:32.513691 7018 round_trippers.go:469] Request Headers:
I0307 10:28:32.513704 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:32.513718 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:32.517123 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:32.517133 7018 round_trippers.go:577] Response Headers:
I0307 10:28:32.517139 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:32.517144 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:32.517148 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:32 GMT
I0307 10:28:32.517154 7018 round_trippers.go:580] Audit-Id: c5f53d8f-ee73-49a6-be78-6ca8c2200a8e
I0307 10:28:32.517161 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:32.517168 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:32.517894 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82836 chars]
I0307 10:28:32.519632 7018 system_pods.go:86] 12 kube-system pods found
I0307 10:28:32.519641 7018 system_pods.go:89] "coredns-787d4945fb-x8m8v" [c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6] Running
I0307 10:28:32.519650 7018 system_pods.go:89] "etcd-multinode-260000" [aa53b0f1-968e-450d-90b2-ad26a79cea99] Running
I0307 10:28:32.519654 7018 system_pods.go:89] "kindnet-gfgwn" [64dc8044-f77e-41b4-bb19-1a254bf29e05] Running
I0307 10:28:32.519659 7018 system_pods.go:89] "kindnet-j5gj9" [f17b9702-c5c0-4b31-a136-e0370bc62d79] Running
I0307 10:28:32.519664 7018 system_pods.go:89] "kindnet-z6kqp" [4884d21b-1be9-4b53-8f70-dd4fe0efa264] Running
I0307 10:28:32.519668 7018 system_pods.go:89] "kube-apiserver-multinode-260000" [64ba25bc-eee2-433a-b0ef-a13769f04555] Running
I0307 10:28:32.519671 7018 system_pods.go:89] "kube-controller-manager-multinode-260000" [8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c] Running
I0307 10:28:32.519675 7018 system_pods.go:89] "kube-proxy-8qwhq" [3e455149-bbe2-4173-a413-f4962626b233] Running
I0307 10:28:32.519679 7018 system_pods.go:89] "kube-proxy-pxshj" [3ee33e87-083d-4833-a6d4-8b459ec6ea70] Running
I0307 10:28:32.519683 7018 system_pods.go:89] "kube-proxy-q8cm8" [b9f69548-a872-4d80-aa73-ffba99b33229] Running
I0307 10:28:32.519686 7018 system_pods.go:89] "kube-scheduler-multinode-260000" [0739e1eb-4026-47ee-b2fe-6a9901c77317] Running
I0307 10:28:32.519690 7018 system_pods.go:89] "storage-provisioner" [0b88c317-8e90-4927-b4f8-cae5597b5dc8] Running
I0307 10:28:32.519694 7018 system_pods.go:126] duration metric: took 203.671188ms to wait for k8s-apps to be running ...
I0307 10:28:32.519699 7018 system_svc.go:44] waiting for kubelet service to be running ....
I0307 10:28:32.519751 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 10:28:32.528776 7018 system_svc.go:56] duration metric: took 9.073723ms WaitForService to wait for kubelet.
I0307 10:28:32.528791 7018 kubeadm.go:578] duration metric: took 14.942639871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0307 10:28:32.528801 7018 node_conditions.go:102] verifying NodePressure condition ...
I0307 10:28:32.714684 7018 request.go:622] Waited for 185.826429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes
I0307 10:28:32.725835 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
I0307 10:28:32.725851 7018 round_trippers.go:469] Request Headers:
I0307 10:28:32.725863 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:32.725878 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:32.728446 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:32.728460 7018 round_trippers.go:577] Response Headers:
I0307 10:28:32.728468 7018 round_trippers.go:580] Audit-Id: baedd684-4a38-47c3-8b1a-5bac961a5fbc
I0307 10:28:32.728477 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:32.728490 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:32.728500 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:32.728507 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:32.728514 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:32 GMT
I0307 10:28:32.728762 7018 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16210 chars]
I0307 10:28:32.729257 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:28:32.729266 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:28:32.729274 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:28:32.729278 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:28:32.729282 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:28:32.729286 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:28:32.729289 7018 node_conditions.go:105] duration metric: took 200.482518ms to run NodePressure ...
I0307 10:28:32.729297 7018 start.go:228] waiting for startup goroutines ...
I0307 10:28:32.729302 7018 start.go:233] waiting for cluster config update ...
I0307 10:28:32.729308 7018 start.go:242] writing updated cluster config ...
I0307 10:28:32.729786 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:28:32.729851 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:28:32.751369 7018 out.go:177] * Starting worker node multinode-260000-m02 in cluster multinode-260000
I0307 10:28:32.794328 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:28:32.794413 7018 cache.go:57] Caching tarball of preloaded images
I0307 10:28:32.794583 7018 preload.go:174] Found /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0307 10:28:32.794601 7018 cache.go:60] Finished verifying existence of preloaded tar for v1.26.2 on docker
I0307 10:28:32.794723 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:28:32.795675 7018 cache.go:193] Successfully downloaded all kic artifacts
I0307 10:28:32.795702 7018 start.go:364] acquiring machines lock for multinode-260000-m02: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 10:28:32.795787 7018 start.go:368] acquired machines lock for "multinode-260000-m02" in 65.198µs
I0307 10:28:32.795817 7018 start.go:96] Skipping create...Using existing machine configuration
I0307 10:28:32.795824 7018 fix.go:55] fixHost starting: m02
I0307 10:28:32.796234 7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:28:32.796271 7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0307 10:28:32.804078 7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51665
I0307 10:28:32.804430 7018 main.go:141] libmachine: () Calling .GetVersion
I0307 10:28:32.804833 7018 main.go:141] libmachine: Using API Version 1
I0307 10:28:32.804855 7018 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 10:28:32.805065 7018 main.go:141] libmachine: () Calling .GetMachineName
I0307 10:28:32.805179 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:32.805269 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetState
I0307 10:28:32.805361 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:28:32.805423 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid from json: 6295
I0307 10:28:32.806220 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid 6295 missing from process table
I0307 10:28:32.806256 7018 fix.go:103] recreateIfNeeded on multinode-260000-m02: state=Stopped err=<nil>
I0307 10:28:32.806268 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
W0307 10:28:32.806350 7018 fix.go:129] unexpected machine state, will restart: <nil>
I0307 10:28:32.827377 7018 out.go:177] * Restarting existing hyperkit VM for "multinode-260000-m02" ...
I0307 10:28:32.869734 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .Start
I0307 10:28:32.869997 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:28:32.870091 7018 main.go:141] libmachine: (multinode-260000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid
I0307 10:28:32.871656 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid 6295 missing from process table
I0307 10:28:32.871680 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | pid 6295 is in state "Stopped"
I0307 10:28:32.871712 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid...
I0307 10:28:32.871965 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Using UUID 835471be-bd14-11ed-9c3c-149d997fca88
I0307 10:28:32.899206 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Generated MAC ba:65:3c:6f:8d:dc
I0307 10:28:32.899232 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000
I0307 10:28:32.899404 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"835471be-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000395b00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
I0307 10:28:32.899444 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"835471be-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000395b00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
I0307 10:28:32.899480 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "835471be-bd14-11ed-9c3c-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/multinode-260000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage,/Users/j
enkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"}
I0307 10:28:32.899519 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 835471be-bd14-11ed-9c3c-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/multinode-260000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/mult
inode-260000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"
I0307 10:28:32.899533 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0307 10:28:32.900716 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Pid is 7098
I0307 10:28:32.901058 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Attempt 0
I0307 10:28:32.901070 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:28:32.901159 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid from json: 7098
I0307 10:28:32.902759 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Searching for ba:65:3c:6f:8d:dc in /var/db/dhcpd_leases ...
I0307 10:28:32.902821 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Found 14 entries in /var/db/dhcpd_leases!
I0307 10:28:32.902837 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d38e}
I0307 10:28:32.902848 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x64078204}
I0307 10:28:32.902856 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ca:14:a2:6d:d0:c ID:1,ca:14:a2:6d:d0:c Lease:0x6407819f}
I0307 10:28:32.902881 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d194}
I0307 10:28:32.902892 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Found match: ba:65:3c:6f:8d:dc
I0307 10:28:32.902900 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | IP: 192.168.64.13
I0307 10:28:32.902925 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetConfigRaw
I0307 10:28:32.903499 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
I0307 10:28:32.903686 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:28:32.904005 7018 machine.go:88] provisioning docker machine ...
I0307 10:28:32.904016 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:32.904127 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetMachineName
I0307 10:28:32.904238 7018 buildroot.go:166] provisioning hostname "multinode-260000-m02"
I0307 10:28:32.904248 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetMachineName
I0307 10:28:32.904335 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:32.904423 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:32.904506 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:32.904579 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:32.904654 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:32.904766 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:28:32.905083 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.13 22 <nil> <nil>}
I0307 10:28:32.905099 7018 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-260000-m02 && echo "multinode-260000-m02" | sudo tee /etc/hostname
I0307 10:28:32.907073 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0307 10:28:32.914845 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0307 10:28:32.915562 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0307 10:28:32.915575 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0307 10:28:32.915583 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0307 10:28:32.915590 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0307 10:28:33.270333 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0307 10:28:33.270350 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0307 10:28:33.374324 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0307 10:28:33.374345 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0307 10:28:33.374362 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0307 10:28:33.374375 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0307 10:28:33.375209 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0307 10:28:33.375231 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0307 10:28:37.885819 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0307 10:28:37.885892 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0307 10:28:37.885906 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0307 10:28:43.994445 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-260000-m02
I0307 10:28:43.994460 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:43.994617 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:43.994725 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:43.994819 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:43.994903 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:43.995031 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:28:43.995375 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.13 22 <nil> <nil>}
I0307 10:28:43.995387 7018 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-260000-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-260000-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-260000-m02' | sudo tee -a /etc/hosts;
fi
fi
I0307 10:28:44.074363 7018 main.go:141] libmachine: SSH cmd err, output: <nil>:
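The SSH command above is minikube's idempotent hostname fix-up: leave /etc/hosts alone if the node name is already present, rewrite an existing `127.0.1.1` line if there is one, otherwise append one. A minimal sketch of the same three-way logic, run against a throwaway temp file instead of the guest's real /etc/hosts (the file contents and use of `mktemp` here are illustrative, not from the log):

```shell
# Stand-in for the guest's /etc/hosts so the idiom can run without root.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"
NAME=multinode-260000-m02

# Same decision tree as the logged command:
# already present -> no-op; 127.0.1.1 line exists -> rewrite it; else append.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep "$NAME" "$HOSTS"
```

Running it twice leaves the file unchanged the second time, which is why the provisioner can safely re-run it on every restart.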
I0307 10:28:44.074384 7018 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
I0307 10:28:44.074392 7018 buildroot.go:174] setting up certificates
I0307 10:28:44.074399 7018 provision.go:83] configureAuth start
I0307 10:28:44.074407 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetMachineName
I0307 10:28:44.074531 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
I0307 10:28:44.074611 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:44.074689 7018 provision.go:138] copyHostCerts
I0307 10:28:44.074731 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
I0307 10:28:44.074787 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
I0307 10:28:44.074794 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
I0307 10:28:44.074898 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
I0307 10:28:44.075070 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
I0307 10:28:44.075104 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
I0307 10:28:44.075109 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
I0307 10:28:44.075176 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
I0307 10:28:44.075308 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
I0307 10:28:44.075341 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
I0307 10:28:44.075345 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
I0307 10:28:44.075412 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
I0307 10:28:44.075534 7018 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.multinode-260000-m02 san=[192.168.64.13 192.168.64.13 localhost 127.0.0.1 minikube multinode-260000-m02]
I0307 10:28:44.229773 7018 provision.go:172] copyRemoteCerts
I0307 10:28:44.229826 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0307 10:28:44.229842 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:44.229985 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:44.230082 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.230172 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:44.230271 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
I0307 10:28:44.272044 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0307 10:28:44.272115 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0307 10:28:44.288148 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem -> /etc/docker/server.pem
I0307 10:28:44.288225 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0307 10:28:44.303969 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0307 10:28:44.304037 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0307 10:28:44.319850 7018 provision.go:86] duration metric: configureAuth took 245.441923ms
I0307 10:28:44.319862 7018 buildroot.go:189] setting minikube options for container-runtime
I0307 10:28:44.320030 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:28:44.320045 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:44.320174 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:44.320276 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:44.320360 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.320463 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.320545 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:44.320659 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:28:44.320957 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.13 22 <nil> <nil>}
I0307 10:28:44.320966 7018 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0307 10:28:44.395776 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0307 10:28:44.395788 7018 buildroot.go:70] root file system type: tmpfs
I0307 10:28:44.395864 7018 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0307 10:28:44.395879 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:44.396009 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:44.396095 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.396175 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.396263 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:44.396386 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:28:44.396702 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.13 22 <nil> <nil>}
I0307 10:28:44.396747 7018 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.64.12"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0307 10:28:44.478924 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.64.12
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0307 10:28:44.478942 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:44.479070 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:44.479153 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.479233 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.479316 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:44.479441 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:28:44.479748 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.13 22 <nil> <nil>}
I0307 10:28:44.479760 7018 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0307 10:28:45.040521 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
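The `diff ... || { mv ...; systemctl restart ...; }` command above is a compare-then-swap: the freshly generated docker.service.new only replaces the installed unit (and triggers a daemon-reload/restart) when the two files differ, since `diff -u` exits non-zero on any difference. Here the target file did not exist yet, so `diff` failed with "can't stat" and the replacement branch ran. A sketch of the same pattern on temp files (the systemd paths and restart step are omitted; file names are illustrative):

```shell
# Compare-then-swap: install the new file only if it differs from current.
current=$(mktemp) && new=$(mktemp)
echo "old config" > "$current"
echo "new config" > "$new"

# `diff -u` exits non-zero when files differ (or current is missing),
# so the || branch fires exactly when an update is needed.
diff -u "$current" "$new" >/dev/null || mv "$new" "$current"
cat "$current"
```

On an unchanged config the `||` branch is skipped entirely, which avoids a needless docker restart on every provisioning pass.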
I0307 10:28:45.040534 7018 machine.go:91] provisioned docker machine in 12.136465556s
I0307 10:28:45.040540 7018 start.go:300] post-start starting for "multinode-260000-m02" (driver="hyperkit")
I0307 10:28:45.040546 7018 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0307 10:28:45.040555 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:45.040748 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0307 10:28:45.040760 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:45.040882 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:45.040972 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:45.041059 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:45.041157 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
I0307 10:28:45.087397 7018 ssh_runner.go:195] Run: cat /etc/os-release
I0307 10:28:45.091149 7018 command_runner.go:130] > NAME=Buildroot
I0307 10:28:45.091158 7018 command_runner.go:130] > VERSION=2021.02.12-1-gab7f370-dirty
I0307 10:28:45.091162 7018 command_runner.go:130] > ID=buildroot
I0307 10:28:45.091166 7018 command_runner.go:130] > VERSION_ID=2021.02.12
I0307 10:28:45.091170 7018 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0307 10:28:45.091259 7018 info.go:137] Remote host: Buildroot 2021.02.12
I0307 10:28:45.091268 7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
I0307 10:28:45.091351 7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
I0307 10:28:45.091498 7018 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
I0307 10:28:45.091504 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /etc/ssl/certs/39032.pem
I0307 10:28:45.091663 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0307 10:28:45.100582 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
I0307 10:28:45.126802 7018 start.go:303] post-start completed in 86.252226ms
I0307 10:28:45.126814 7018 fix.go:57] fixHost completed within 12.330934005s
I0307 10:28:45.126826 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:45.126964 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:45.127056 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:45.127154 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:45.127232 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:45.127364 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:28:45.127672 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.13 22 <nil> <nil>}
I0307 10:28:45.127680 7018 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0307 10:28:45.202858 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678213725.334485743
I0307 10:28:45.202870 7018 fix.go:207] guest clock: 1678213725.334485743
I0307 10:28:45.202880 7018 fix.go:220] Guest: 2023-03-07 10:28:45.334485743 -0800 PST Remote: 2023-03-07 10:28:45.126816 -0800 PST m=+87.461319305 (delta=207.669743ms)
I0307 10:28:45.202890 7018 fix.go:191] guest clock delta is within tolerance: 207.669743ms
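The clock check above runs `date +%s.%N` in the guest and compares the result against the host's wall clock, accepting the node only if the delta stays within minikube's tolerance (about 207ms here). The comparison amounts to a floating-point subtraction of two epoch timestamps; sketched host-side with the values from this log (hard-coded here purely for illustration):

```shell
# Guest and host epoch timestamps as logged; the delta is guest - host.
guest=1678213725.334485743
host=1678213725.126816
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { printf "%.6f", g - h }')
echo "delta=${delta}s"
```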
I0307 10:28:45.202894 7018 start.go:83] releasing machines lock for "multinode-260000-m02", held for 12.407039272s
I0307 10:28:45.202911 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:45.203045 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
I0307 10:28:45.229173 7018 out.go:177] * Found network options:
I0307 10:28:45.249904 7018 out.go:177] - NO_PROXY=192.168.64.12
W0307 10:28:45.271748 7018 proxy.go:119] fail to check proxy env: Error ip not in block
I0307 10:28:45.271793 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:45.272543 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:45.272757 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:45.272892 7018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0307 10:28:45.272940 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
W0307 10:28:45.273042 7018 proxy.go:119] fail to check proxy env: Error ip not in block
I0307 10:28:45.273135 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:45.273147 7018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0307 10:28:45.273165 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:45.273342 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:45.273376 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:45.273607 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:45.273659 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:45.273827 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:45.273861 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
I0307 10:28:45.274044 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
I0307 10:28:45.313860 7018 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0307 10:28:45.314024 7018 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0307 10:28:45.314083 7018 ssh_runner.go:195] Run: which cri-dockerd
I0307 10:28:45.353726 7018 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0307 10:28:45.354872 7018 command_runner.go:130] > /usr/bin/cri-dockerd
I0307 10:28:45.355027 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0307 10:28:45.362451 7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0307 10:28:45.373398 7018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0307 10:28:45.384177 7018 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0307 10:28:45.384307 7018 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0307 10:28:45.384316 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:28:45.384403 7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 10:28:45.401772 7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
I0307 10:28:45.401790 7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
I0307 10:28:45.401795 7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
I0307 10:28:45.401801 7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
I0307 10:28:45.401805 7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
I0307 10:28:45.401809 7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0307 10:28:45.401813 7018 command_runner.go:130] > registry.k8s.io/pause:3.9
I0307 10:28:45.401818 7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0307 10:28:45.401823 7018 command_runner.go:130] > registry.k8s.io/pause:3.6
I0307 10:28:45.401828 7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0307 10:28:45.401832 7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0307 10:28:45.402825 7018 docker.go:630] Got preloaded images: -- stdout --
kindest/kindnetd:v20230227-15197099
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0307 10:28:45.402834 7018 docker.go:560] Images already preloaded, skipping extraction
I0307 10:28:45.402840 7018 start.go:485] detecting cgroup driver to use...
I0307 10:28:45.402914 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 10:28:45.415287 7018 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0307 10:28:45.415302 7018 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
I0307 10:28:45.415537 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0307 10:28:45.422829 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0307 10:28:45.429702 7018 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0307 10:28:45.429750 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0307 10:28:45.436708 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 10:28:45.443666 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0307 10:28:45.450827 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 10:28:45.457881 7018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0307 10:28:45.464910 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
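The run of `sed -i -r` commands above rewrites containerd's config.toml in place; the cgroup-driver step, for instance, forces `SystemdCgroup = false` while preserving the line's indentation via the `( *)` capture group. The same edit applied to a minimal sample config (a temp file stands in for /etc/containerd/config.toml):

```shell
# Sample fragment of a containerd config with systemd cgroups enabled.
CONF=$(mktemp)
printf '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n' > "$CONF"

# Identical substitution to the logged command: flip the value, keep indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CONF"
grep SystemdCgroup "$CONF"
```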
I0307 10:28:45.471731 7018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0307 10:28:45.477787 7018 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0307 10:28:45.477987 7018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0307 10:28:45.484272 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:28:45.566893 7018 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0307 10:28:45.578247 7018 start.go:485] detecting cgroup driver to use...
I0307 10:28:45.578332 7018 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0307 10:28:45.587719 7018 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0307 10:28:45.588048 7018 command_runner.go:130] > [Unit]
I0307 10:28:45.588056 7018 command_runner.go:130] > Description=Docker Application Container Engine
I0307 10:28:45.588070 7018 command_runner.go:130] > Documentation=https://docs.docker.com
I0307 10:28:45.588078 7018 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0307 10:28:45.588085 7018 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0307 10:28:45.588091 7018 command_runner.go:130] > StartLimitBurst=3
I0307 10:28:45.588111 7018 command_runner.go:130] > StartLimitIntervalSec=60
I0307 10:28:45.588119 7018 command_runner.go:130] > [Service]
I0307 10:28:45.588126 7018 command_runner.go:130] > Type=notify
I0307 10:28:45.588130 7018 command_runner.go:130] > Restart=on-failure
I0307 10:28:45.588134 7018 command_runner.go:130] > Environment=NO_PROXY=192.168.64.12
I0307 10:28:45.588141 7018 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0307 10:28:45.588148 7018 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0307 10:28:45.588153 7018 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0307 10:28:45.588159 7018 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0307 10:28:45.588164 7018 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0307 10:28:45.588170 7018 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0307 10:28:45.588176 7018 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0307 10:28:45.588189 7018 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0307 10:28:45.588195 7018 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0307 10:28:45.588199 7018 command_runner.go:130] > ExecStart=
I0307 10:28:45.588218 7018 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
I0307 10:28:45.588223 7018 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0307 10:28:45.588228 7018 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0307 10:28:45.588234 7018 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0307 10:28:45.588238 7018 command_runner.go:130] > LimitNOFILE=infinity
I0307 10:28:45.588247 7018 command_runner.go:130] > LimitNPROC=infinity
I0307 10:28:45.588253 7018 command_runner.go:130] > LimitCORE=infinity
I0307 10:28:45.588259 7018 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0307 10:28:45.588263 7018 command_runner.go:130] > # Only systemd 226 and above support this version.
I0307 10:28:45.588267 7018 command_runner.go:130] > TasksMax=infinity
I0307 10:28:45.588270 7018 command_runner.go:130] > TimeoutStartSec=0
I0307 10:28:45.588276 7018 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0307 10:28:45.588279 7018 command_runner.go:130] > Delegate=yes
I0307 10:28:45.588284 7018 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0307 10:28:45.588294 7018 command_runner.go:130] > KillMode=process
I0307 10:28:45.588298 7018 command_runner.go:130] > [Install]
I0307 10:28:45.588302 7018 command_runner.go:130] > WantedBy=multi-user.target
I0307 10:28:45.588380 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 10:28:45.599940 7018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0307 10:28:45.612861 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 10:28:45.622327 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 10:28:45.630580 7018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0307 10:28:45.653722 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 10:28:45.662024 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 10:28:45.674917 7018 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0307 10:28:45.674931 7018 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0307 10:28:45.674988 7018 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0307 10:28:45.756263 7018 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0307 10:28:45.846497 7018 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0307 10:28:45.846514 7018 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0307 10:28:45.858511 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:28:45.944748 7018 ssh_runner.go:195] Run: sudo systemctl restart docker
I0307 10:28:47.255144 7018 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.310371403s)
I0307 10:28:47.255214 7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 10:28:47.335677 7018 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0307 10:28:47.417454 7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 10:28:47.513228 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:28:47.598471 7018 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0307 10:28:47.611967 7018 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0307 10:28:47.612060 7018 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0307 10:28:47.616814 7018 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0307 10:28:47.616826 7018 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0307 10:28:47.616831 7018 command_runner.go:130] > Device: 16h/22d Inode: 852 Links: 1
I0307 10:28:47.616837 7018 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0307 10:28:47.616851 7018 command_runner.go:130] > Access: 2023-03-07 18:28:47.742167434 +0000
I0307 10:28:47.616856 7018 command_runner.go:130] > Modify: 2023-03-07 18:28:47.742167434 +0000
I0307 10:28:47.616860 7018 command_runner.go:130] > Change: 2023-03-07 18:28:47.744167434 +0000
I0307 10:28:47.616865 7018 command_runner.go:130] > Birth: -
I0307 10:28:47.617043 7018 start.go:553] Will wait 60s for crictl version
I0307 10:28:47.617089 7018 ssh_runner.go:195] Run: which crictl
I0307 10:28:47.619815 7018 command_runner.go:130] > /usr/bin/crictl
I0307 10:28:47.619873 7018 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0307 10:28:47.691285 7018 command_runner.go:130] > Version: 0.1.0
I0307 10:28:47.691297 7018 command_runner.go:130] > RuntimeName: docker
I0307 10:28:47.691301 7018 command_runner.go:130] > RuntimeVersion: 20.10.23
I0307 10:28:47.691305 7018 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0307 10:28:47.692228 7018 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0307 10:28:47.692301 7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 10:28:47.711035 7018 command_runner.go:130] > 20.10.23
I0307 10:28:47.728475 7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 10:28:47.749259 7018 command_runner.go:130] > 20.10.23
I0307 10:28:47.770120 7018 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
I0307 10:28:47.813210 7018 out.go:177] - env NO_PROXY=192.168.64.12
I0307 10:28:47.835385 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
I0307 10:28:47.835775 7018 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0307 10:28:47.840292 7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0307 10:28:47.848646 7018 certs.go:56] Setting up /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000 for IP: 192.168.64.13
I0307 10:28:47.848666 7018 certs.go:186] acquiring lock for shared ca certs: {Name:mk21aa92235e3b083ba3cf4a52527e5734aca22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 10:28:47.848814 7018 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key
I0307 10:28:47.848878 7018 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key
I0307 10:28:47.848891 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0307 10:28:47.848915 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0307 10:28:47.848940 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0307 10:28:47.848960 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0307 10:28:47.849045 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem (1338 bytes)
W0307 10:28:47.849088 7018 certs.go:397] ignoring /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903_empty.pem, impossibly tiny 0 bytes
I0307 10:28:47.849100 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem (1675 bytes)
I0307 10:28:47.849141 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem (1082 bytes)
I0307 10:28:47.849185 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem (1123 bytes)
I0307 10:28:47.849224 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem (1675 bytes)
I0307 10:28:47.849299 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem (1708 bytes)
I0307 10:28:47.849342 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0307 10:28:47.849367 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem -> /usr/share/ca-certificates/3903.pem
I0307 10:28:47.849386 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /usr/share/ca-certificates/39032.pem
I0307 10:28:47.849662 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0307 10:28:47.865455 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0307 10:28:47.881052 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0307 10:28:47.896926 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0307 10:28:47.912741 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0307 10:28:47.928528 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem --> /usr/share/ca-certificates/3903.pem (1338 bytes)
I0307 10:28:47.945013 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /usr/share/ca-certificates/39032.pem (1708 bytes)
I0307 10:28:47.960635 7018 ssh_runner.go:195] Run: openssl version
I0307 10:28:47.964021 7018 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0307 10:28:47.964272 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0307 10:28:47.971316 7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0307 10:28:47.974134 7018 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 7 18:02 /usr/share/ca-certificates/minikubeCA.pem
I0307 10:28:47.974290 7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 7 18:02 /usr/share/ca-certificates/minikubeCA.pem
I0307 10:28:47.974333 7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0307 10:28:47.977654 7018 command_runner.go:130] > b5213941
I0307 10:28:47.977920 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0307 10:28:47.984887 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3903.pem && ln -fs /usr/share/ca-certificates/3903.pem /etc/ssl/certs/3903.pem"
I0307 10:28:47.992249 7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3903.pem
I0307 10:28:47.995266 7018 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 7 18:06 /usr/share/ca-certificates/3903.pem
I0307 10:28:47.995458 7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 7 18:06 /usr/share/ca-certificates/3903.pem
I0307 10:28:47.995499 7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3903.pem
I0307 10:28:47.998865 7018 command_runner.go:130] > 51391683
I0307 10:28:47.999120 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3903.pem /etc/ssl/certs/51391683.0"
I0307 10:28:48.006141 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/39032.pem && ln -fs /usr/share/ca-certificates/39032.pem /etc/ssl/certs/39032.pem"
I0307 10:28:48.013240 7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/39032.pem
I0307 10:28:48.016074 7018 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 7 18:06 /usr/share/ca-certificates/39032.pem
I0307 10:28:48.016260 7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 7 18:06 /usr/share/ca-certificates/39032.pem
I0307 10:28:48.016294 7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/39032.pem
I0307 10:28:48.019631 7018 command_runner.go:130] > 3ec20f2e
I0307 10:28:48.019880 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/39032.pem /etc/ssl/certs/3ec20f2e.0"
I0307 10:28:48.026902 7018 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0307 10:28:48.048324 7018 command_runner.go:130] > cgroupfs
I0307 10:28:48.048980 7018 cni.go:84] Creating CNI manager for ""
I0307 10:28:48.048990 7018 cni.go:136] 3 nodes found, recommending kindnet
I0307 10:28:48.048997 7018 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0307 10:28:48.049008 7018 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.13 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-260000 NodeName:multinode-260000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0307 10:28:48.049099 7018 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.13
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-260000-m02"
kubeletExtraArgs:
node-ip: 192.168.64.13
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.12"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0307 10:28:48.049134 7018 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-260000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.13
[Install]
config:
{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0307 10:28:48.049192 7018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
I0307 10:28:48.055441 7018 command_runner.go:130] > kubeadm
I0307 10:28:48.055448 7018 command_runner.go:130] > kubectl
I0307 10:28:48.055454 7018 command_runner.go:130] > kubelet
I0307 10:28:48.055533 7018 binaries.go:44] Found k8s binaries, skipping transfer
I0307 10:28:48.055575 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0307 10:28:48.061804 7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (453 bytes)
I0307 10:28:48.072809 7018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0307 10:28:48.083885 7018 ssh_runner.go:195] Run: grep 192.168.64.12 control-plane.minikube.internal$ /etc/hosts
I0307 10:28:48.086255 7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.12 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0307 10:28:48.093971 7018 host.go:66] Checking if "multinode-260000" exists ...
I0307 10:28:48.094151 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:28:48.094253 7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:28:48.094274 7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0307 10:28:48.101209 7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51684
I0307 10:28:48.101550 7018 main.go:141] libmachine: () Calling .GetVersion
I0307 10:28:48.101900 7018 main.go:141] libmachine: Using API Version 1
I0307 10:28:48.101916 7018 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 10:28:48.102150 7018 main.go:141] libmachine: () Calling .GetMachineName
I0307 10:28:48.102258 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:28:48.102341 7018 start.go:301] JoinCluster: &{Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 10:28:48.102433 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm token create --print-join-command --ttl=0"
I0307 10:28:48.102443 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:28:48.102521 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:28:48.102622 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:28:48.102707 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:28:48.102782 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
I0307 10:28:48.189788 7018 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zh6icb.v6kqx4onyxvfd8hz --discovery-token-ca-cert-hash sha256:d33f97e9e16d7e3e3153d34b9abf6cc9c10aed60f07ce313a956e9c1066684af
I0307 10:28:48.189814 7018 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0307 10:28:48.189833 7018 host.go:66] Checking if "multinode-260000" exists ...
I0307 10:28:48.190161 7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:28:48.190186 7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0307 10:28:48.196916 7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51687
I0307 10:28:48.197249 7018 main.go:141] libmachine: () Calling .GetVersion
I0307 10:28:48.197612 7018 main.go:141] libmachine: Using API Version 1
I0307 10:28:48.197624 7018 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 10:28:48.197818 7018 main.go:141] libmachine: () Calling .GetMachineName
I0307 10:28:48.197901 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:28:48.198033 7018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl drain multinode-260000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
I0307 10:28:48.198050 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:28:48.198133 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:28:48.198209 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:28:48.198294 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:28:48.198376 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
I0307 10:28:48.295688 7018 command_runner.go:130] > node/multinode-260000-m02 cordoned
I0307 10:28:51.318733 7018 command_runner.go:130] > pod "busybox-6b86dd6d48-dmrds" has DeletionTimestamp older than 1 seconds, skipping
I0307 10:28:51.318748 7018 command_runner.go:130] > node/multinode-260000-m02 drained
I0307 10:28:51.319712 7018 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
I0307 10:28:51.319724 7018 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-z6kqp, kube-system/kube-proxy-pxshj
I0307 10:28:51.319743 7018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl drain multinode-260000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.121678108s)
I0307 10:28:51.319753 7018 node.go:109] successfully drained node "m02"
I0307 10:28:51.320044 7018 loader.go:373] Config loaded from file: /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:28:51.320243 7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 10:28:51.320537 7018 request.go:1171] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
I0307 10:28:51.320569 7018 round_trippers.go:463] DELETE https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:51.320574 7018 round_trippers.go:469] Request Headers:
I0307 10:28:51.320580 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:51.320586 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:51.320592 7018 round_trippers.go:473] Content-Type: application/json
I0307 10:28:51.323598 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:51.323609 7018 round_trippers.go:577] Response Headers:
I0307 10:28:51.323615 7018 round_trippers.go:580] Audit-Id: d4c330be-b2e7-4781-aecc-cf162ed512f1
I0307 10:28:51.323620 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:51.323625 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:51.323630 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:51.323636 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:51.323643 7018 round_trippers.go:580] Content-Length: 171
I0307 10:28:51.323649 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:51 GMT
I0307 10:28:51.323663 7018 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-260000-m02","kind":"nodes","uid":"75f8e0c4-47f4-43dc-ac5e-5f77d8d4ab3b"}}
I0307 10:28:51.323690 7018 node.go:125] successfully deleted node "m02"
I0307 10:28:51.323697 7018 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0307 10:28:51.323715 7018 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0307 10:28:51.323731 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zh6icb.v6kqx4onyxvfd8hz --discovery-token-ca-cert-hash sha256:d33f97e9e16d7e3e3153d34b9abf6cc9c10aed60f07ce313a956e9c1066684af --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-260000-m02"
I0307 10:28:51.374604 7018 command_runner.go:130] ! W0307 18:28:51.510767 1198 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:51.505076 7018 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0307 10:28:53.147207 7018 command_runner.go:130] > [preflight] Running pre-flight checks
I0307 10:28:53.147229 7018 command_runner.go:130] > [preflight] Reading configuration from the cluster...
I0307 10:28:53.147240 7018 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0307 10:28:53.147249 7018 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0307 10:28:53.147258 7018 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0307 10:28:53.147266 7018 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0307 10:28:53.147275 7018 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0307 10:28:53.147285 7018 command_runner.go:130] > This node has joined the cluster:
I0307 10:28:53.147294 7018 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
I0307 10:28:53.147304 7018 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
I0307 10:28:53.147313 7018 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
I0307 10:28:53.147327 7018 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zh6icb.v6kqx4onyxvfd8hz --discovery-token-ca-cert-hash sha256:d33f97e9e16d7e3e3153d34b9abf6cc9c10aed60f07ce313a956e9c1066684af --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-260000-m02": (1.823577721s)
I0307 10:28:53.147343 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0307 10:28:53.256139 7018 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
I0307 10:28:53.347575 7018 start.go:303] JoinCluster complete in 5.245201975s
I0307 10:28:53.347588 7018 cni.go:84] Creating CNI manager for ""
I0307 10:28:53.347594 7018 cni.go:136] 3 nodes found, recommending kindnet
I0307 10:28:53.347676 7018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0307 10:28:53.350863 7018 command_runner.go:130] > File: /opt/cni/bin/portmap
I0307 10:28:53.350874 7018 command_runner.go:130] > Size: 2798344 Blocks: 5472 IO Block: 4096 regular file
I0307 10:28:53.350882 7018 command_runner.go:130] > Device: 11h/17d Inode: 3542 Links: 1
I0307 10:28:53.350888 7018 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0307 10:28:53.350895 7018 command_runner.go:130] > Access: 2023-03-07 18:27:25.800133630 +0000
I0307 10:28:53.350899 7018 command_runner.go:130] > Modify: 2023-02-24 23:58:49.000000000 +0000
I0307 10:28:53.350904 7018 command_runner.go:130] > Change: 2023-03-07 18:27:24.520133706 +0000
I0307 10:28:53.350907 7018 command_runner.go:130] > Birth: -
I0307 10:28:53.350976 7018 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
I0307 10:28:53.350986 7018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0307 10:28:53.365774 7018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0307 10:28:53.573328 7018 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0307 10:28:53.576007 7018 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0307 10:28:53.577626 7018 command_runner.go:130] > serviceaccount/kindnet unchanged
I0307 10:28:53.586569 7018 command_runner.go:130] > daemonset.apps/kindnet configured
I0307 10:28:53.588317 7018 loader.go:373] Config loaded from file: /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:28:53.588503 7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 10:28:53.588731 7018 round_trippers.go:463] GET https://192.168.64.12:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0307 10:28:53.588737 7018 round_trippers.go:469] Request Headers:
I0307 10:28:53.588744 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:53.588750 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:53.590037 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:53.590045 7018 round_trippers.go:577] Response Headers:
I0307 10:28:53.590053 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:53.590058 7018 round_trippers.go:580] Content-Length: 292
I0307 10:28:53.590065 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:53 GMT
I0307 10:28:53.590074 7018 round_trippers.go:580] Audit-Id: 09b51ea0-529c-4d47-a052-cef6398d810c
I0307 10:28:53.590096 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:53.590105 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:53.590110 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:53.590121 7018 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9058bb7-5525-4245-a92a-3b0f0144c5d4","resourceVersion":"1155","creationTimestamp":"2023-03-07T18:18:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0307 10:28:53.590164 7018 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-260000" context rescaled to 1 replicas
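The rescale step above reads the `autoscaling/v1` Scale subresource of the coredns deployment and inspects `spec.replicas`. A minimal, hypothetical sketch of decoding that response body (types and function names are illustrative, not minikube's actual code; the abbreviated JSON is taken from the response logged above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// scale mirrors only the fields of the autoscaling/v1 Scale object that the
// check needs; the real type lives in k8s.io/api/autoscaling/v1.
type scale struct {
	Spec struct {
		Replicas int32 `json:"replicas"`
	} `json:"spec"`
	Status struct {
		Replicas int32 `json:"replicas"`
	} `json:"status"`
}

// desiredReplicas extracts spec.replicas from a raw Scale response body.
func desiredReplicas(body []byte) (int32, error) {
	var s scale
	if err := json.Unmarshal(body, &s); err != nil {
		return 0, err
	}
	return s.Spec.Replicas, nil
}

func main() {
	// Abbreviated copy of the Scale response body shown in the log.
	body := []byte(`{"kind":"Scale","apiVersion":"autoscaling/v1","spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}`)
	n, err := desiredReplicas(body)
	fmt.Println(n, err)
}
```

Since the deployment already had one replica, the log reports it as "rescaled to 1 replicas" without issuing a write.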
I0307 10:28:53.590178 7018 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0307 10:28:53.633568 7018 out.go:177] * Verifying Kubernetes components...
I0307 10:28:53.691468 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 10:28:53.703497 7018 loader.go:373] Config loaded from file: /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:28:53.703698 7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 10:28:53.703918 7018 node_ready.go:35] waiting up to 6m0s for node "multinode-260000-m02" to be "Ready" ...
I0307 10:28:53.703963 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:53.703968 7018 round_trippers.go:469] Request Headers:
I0307 10:28:53.703974 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:53.703981 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:53.705420 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:53.705433 7018 round_trippers.go:577] Response Headers:
I0307 10:28:53.705439 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:53.705445 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:53 GMT
I0307 10:28:53.705455 7018 round_trippers.go:580] Audit-Id: e2d373c1-190f-45e0-b9cf-3d8d054fb1e3
I0307 10:28:53.705460 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:53.705465 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:53.705470 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:53.705557 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
I0307 10:28:54.205959 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:54.205976 7018 round_trippers.go:469] Request Headers:
I0307 10:28:54.205988 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:54.205995 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:54.208023 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:54.208036 7018 round_trippers.go:577] Response Headers:
I0307 10:28:54.208042 7018 round_trippers.go:580] Audit-Id: 162bfd38-128d-4c94-8620-4dd73b77dd1a
I0307 10:28:54.208050 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:54.208055 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:54.208065 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:54.208073 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:54.208080 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:54 GMT
I0307 10:28:54.208268 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
I0307 10:28:54.706066 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:54.706077 7018 round_trippers.go:469] Request Headers:
I0307 10:28:54.706084 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:54.706089 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:54.708076 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:54.708088 7018 round_trippers.go:577] Response Headers:
I0307 10:28:54.708095 7018 round_trippers.go:580] Audit-Id: dd80323e-e17e-4577-b133-2911fcce9fc1
I0307 10:28:54.708100 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:54.708105 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:54.708110 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:54.708115 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:54.708120 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:54 GMT
I0307 10:28:54.708207 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
I0307 10:28:55.206158 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:55.206172 7018 round_trippers.go:469] Request Headers:
I0307 10:28:55.206179 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:55.206184 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:55.207805 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:55.207815 7018 round_trippers.go:577] Response Headers:
I0307 10:28:55.207820 7018 round_trippers.go:580] Audit-Id: 9200c148-32d8-4985-98ec-72d4b636ae7e
I0307 10:28:55.207825 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:55.207831 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:55.207835 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:55.207840 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:55.207845 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:55 GMT
I0307 10:28:55.207923 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
I0307 10:28:55.706104 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:55.706119 7018 round_trippers.go:469] Request Headers:
I0307 10:28:55.706125 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:55.706131 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:55.707769 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:55.707783 7018 round_trippers.go:577] Response Headers:
I0307 10:28:55.707791 7018 round_trippers.go:580] Audit-Id: 0773193b-a44b-4173-a89e-1b4397280289
I0307 10:28:55.707797 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:55.707803 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:55.707808 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:55.707813 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:55.707818 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:55 GMT
I0307 10:28:55.707892 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
I0307 10:28:55.708076 7018 node_ready.go:58] node "multinode-260000-m02" has status "Ready":"False"
I0307 10:28:56.205958 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:56.205974 7018 round_trippers.go:469] Request Headers:
I0307 10:28:56.205981 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:56.205986 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:56.207374 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:56.207390 7018 round_trippers.go:577] Response Headers:
I0307 10:28:56.207399 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:56.207406 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:56.207412 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:56 GMT
I0307 10:28:56.207418 7018 round_trippers.go:580] Audit-Id: 0b890c7d-2626-4ab5-8e75-3a16b9eecf54
I0307 10:28:56.207427 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:56.207433 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:56.207515 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
I0307 10:28:56.705900 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:56.705916 7018 round_trippers.go:469] Request Headers:
I0307 10:28:56.705923 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:56.705928 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:56.707741 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:56.707756 7018 round_trippers.go:577] Response Headers:
I0307 10:28:56.707766 7018 round_trippers.go:580] Audit-Id: 0e59b396-e7bf-4b72-b74c-a01f645f9864
I0307 10:28:56.707778 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:56.707804 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:56.707821 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:56.707834 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:56.707842 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:56 GMT
I0307 10:28:56.707912 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1221","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4772 chars]
I0307 10:28:57.206205 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:57.206216 7018 round_trippers.go:469] Request Headers:
I0307 10:28:57.206228 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:57.206234 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:57.207878 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:57.207889 7018 round_trippers.go:577] Response Headers:
I0307 10:28:57.207894 7018 round_trippers.go:580] Audit-Id: a4dcdc28-4a89-41fc-a490-5614c72a2f7c
I0307 10:28:57.207900 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:57.207905 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:57.207913 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:57.207918 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:57.207923 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:57 GMT
I0307 10:28:57.208010 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1221","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4772 chars]
I0307 10:28:57.706332 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:57.727379 7018 round_trippers.go:469] Request Headers:
I0307 10:28:57.727424 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:57.727437 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:57.731183 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:57.731198 7018 round_trippers.go:577] Response Headers:
I0307 10:28:57.731206 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:57 GMT
I0307 10:28:57.731221 7018 round_trippers.go:580] Audit-Id: f535ff1c-e3e0-4a4e-acf9-6dabcd316387
I0307 10:28:57.731231 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:57.731241 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:57.731249 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:57.731255 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:57.731338 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1221","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4772 chars]
I0307 10:28:57.731568 7018 node_ready.go:58] node "multinode-260000-m02" has status "Ready":"False"
I0307 10:28:58.206943 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:58.206954 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.206960 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.206966 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.208597 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.208612 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.208617 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.208623 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.208628 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.208633 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.208638 7018 round_trippers.go:580] Audit-Id: 14bb95b4-52c5-49f6-baee-19c30e38be33
I0307 10:28:58.208643 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.208733 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1235","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4619 chars]
I0307 10:28:58.208922 7018 node_ready.go:49] node "multinode-260000-m02" has status "Ready":"True"
I0307 10:28:58.208932 7018 node_ready.go:38] duration metric: took 4.5049847s waiting for node "multinode-260000-m02" to be "Ready" ...
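The wait recorded above is a simple poll loop: GET the Node object roughly every 500ms and check whether its "Ready" condition is "True", giving up after the 6m0s budget. A self-contained sketch of that pattern, assuming hypothetical minimal types (the real check in minikube's node_ready.go works against k8s.io/api/core/v1 objects fetched via client-go):

```go
package main

import (
	"fmt"
	"time"
)

// condition mirrors the two NodeCondition fields the check needs.
type condition struct {
	Type   string
	Status string
}

// isNodeReady reports whether the node's Ready condition is True.
func isNodeReady(conds []condition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

// waitForReady polls fetch every interval until the node reports Ready or
// the timeout elapses, mirroring the GET loop in the log above.
func waitForReady(fetch func() []condition, interval, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if isNodeReady(fetch()) {
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	// Simulate a node that becomes Ready on the third poll, as
	// multinode-260000-m02 did after ~4.5s in the log.
	polls := 0
	fetch := func() []condition {
		polls++
		if polls >= 3 {
			return []condition{{Type: "Ready", Status: "True"}}
		}
		return []condition{{Type: "Ready", Status: "False"}}
	}
	fmt.Println(waitForReady(fetch, time.Millisecond, time.Second))
}
```

In production code this would typically use `k8s.io/apimachinery/pkg/util/wait` rather than a hand-rolled loop.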
I0307 10:28:58.208937 7018 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 10:28:58.208966 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:58.208970 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.208977 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.208983 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.211168 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:58.211181 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.211186 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.211192 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.211200 7018 round_trippers.go:580] Audit-Id: 9e29ae0f-c0b8-46e2-b2ef-ac7c8b7cd885
I0307 10:28:58.211206 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.211211 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.211218 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.212031 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1235"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83248 chars]
I0307 10:28:58.213928 7018 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.213959 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:58.213966 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.213972 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.213977 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.215266 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.215275 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.215280 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.215285 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.215299 7018 round_trippers.go:580] Audit-Id: da0297af-ddf8-40bb-ba7e-ee7c25d1d50b
I0307 10:28:58.215307 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.215315 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.215322 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.215421 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6489 chars]
I0307 10:28:58.215654 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:58.215660 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.215667 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.215673 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.217001 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.217011 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.217018 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.217023 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.217030 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.217035 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.217044 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.217052 7018 round_trippers.go:580] Audit-Id: bcd5819d-b6c4-402c-84d8-8b34af188a85
I0307 10:28:58.217231 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:58.217408 7018 pod_ready.go:92] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:58.217413 7018 pod_ready.go:81] duration metric: took 3.477588ms waiting for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.217418 7018 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.217449 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-260000
I0307 10:28:58.217455 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.217463 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.217469 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.218541 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.218548 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.218553 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.218559 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.218569 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.218574 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.218579 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.218584 7018 round_trippers.go:580] Audit-Id: 3ecf0cc4-5524-4969-bf64-78cbfa7bcc64
I0307 10:28:58.218670 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"1080","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6056 chars]
I0307 10:28:58.218878 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:58.218884 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.218890 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.218895 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.220222 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.220239 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.220246 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.220251 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.220256 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.220262 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.220268 7018 round_trippers.go:580] Audit-Id: 16035865-fbff-46a4-82b6-1d4dc225f856
I0307 10:28:58.220272 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.220340 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:58.220511 7018 pod_ready.go:92] pod "etcd-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:58.220516 7018 pod_ready.go:81] duration metric: took 3.092542ms waiting for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.220524 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.220551 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-260000
I0307 10:28:58.220555 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.220561 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.220566 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.221715 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.221722 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.221727 7018 round_trippers.go:580] Audit-Id: db547fd7-e43b-49f4-9206-870682ba8ead
I0307 10:28:58.221738 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.221744 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.221749 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.221754 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.221769 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.221904 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-260000","namespace":"kube-system","uid":"64ba25bc-eee2-433a-b0ef-a13769f04555","resourceVersion":"1143","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"76402f877907c95a3936143f580968be","kubernetes.io/config.mirror":"76402f877907c95a3936143f580968be","kubernetes.io/config.seen":"2023-03-07T18:18:28.739580253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7591 chars]
I0307 10:28:58.222136 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:58.222142 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.222148 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.222153 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.223204 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.223213 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.223218 7018 round_trippers.go:580] Audit-Id: af2553b3-7312-4d2a-a007-6b34fbaa60fe
I0307 10:28:58.223223 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.223229 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.223233 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.223239 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.223243 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.223402 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:58.223567 7018 pod_ready.go:92] pod "kube-apiserver-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:58.223572 7018 pod_ready.go:81] duration metric: took 3.043676ms waiting for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.223578 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.223603 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-260000
I0307 10:28:58.223607 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.223624 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.223632 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.224832 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.224840 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.224845 7018 round_trippers.go:580] Audit-Id: 08c9fdf6-3267-4e2e-935f-9c4e84582ec5
I0307 10:28:58.224850 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.224859 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.224864 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.224869 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.224874 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.225199 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-260000","namespace":"kube-system","uid":"8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c","resourceVersion":"1131","creationTimestamp":"2023-03-07T18:18:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.mirror":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.seen":"2023-03-07T18:18:16.838236256Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7159 chars]
I0307 10:28:58.225429 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:58.225437 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.225443 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.225449 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.226687 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.226694 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.226699 7018 round_trippers.go:580] Audit-Id: 7796790d-620c-401a-9f3a-b4ce8b9acc5f
I0307 10:28:58.226704 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.226710 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.226714 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.226719 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.226725 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.226885 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:58.227057 7018 pod_ready.go:92] pod "kube-controller-manager-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:58.227062 7018 pod_ready.go:81] duration metric: took 3.479487ms waiting for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.227067 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.407059 7018 request.go:622] Waited for 179.951206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
I0307 10:28:58.407094 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
I0307 10:28:58.407101 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.407154 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.407160 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.408789 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.408801 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.408809 7018 round_trippers.go:580] Audit-Id: c45ed864-b7ed-4df5-a14e-1c1a9c154846
I0307 10:28:58.408817 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.408824 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.408829 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.408834 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.408845 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.409069 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8qwhq","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e455149-bbe2-4173-a413-f4962626b233","resourceVersion":"1061","creationTimestamp":"2023-03-07T18:18:41Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
I0307 10:28:58.608673 7018 request.go:622] Waited for 199.329269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:58.608848 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:58.608860 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.608872 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.608882 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.611654 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:58.611670 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.611677 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.611684 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.611692 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.611701 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.611709 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.611715 7018 round_trippers.go:580] Audit-Id: 76524fea-611e-49f8-bb7e-5eb3dc168072
I0307 10:28:58.611840 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:58.612099 7018 pod_ready.go:92] pod "kube-proxy-8qwhq" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:58.612108 7018 pod_ready.go:81] duration metric: took 385.031837ms waiting for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.612116 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.808367 7018 request.go:622] Waited for 196.171802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
I0307 10:28:58.808492 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
I0307 10:28:58.808504 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.808517 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.808529 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.811399 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:58.811415 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.811423 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.811429 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.811436 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.811442 7018 round_trippers.go:580] Audit-Id: 3bbb7a3c-520d-4a16-9e4e-62fab5920986
I0307 10:28:58.811449 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.811455 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.811559 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pxshj","generateName":"kube-proxy-","namespace":"kube-system","uid":"3ee33e87-083d-4833-a6d4-8b459ec6ea70","resourceVersion":"1218","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
I0307 10:28:59.008406 7018 request.go:622] Waited for 196.512217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:59.008597 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:59.008608 7018 round_trippers.go:469] Request Headers:
I0307 10:28:59.008621 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:59.008631 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:59.011231 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:59.011250 7018 round_trippers.go:577] Response Headers:
I0307 10:28:59.011258 7018 round_trippers.go:580] Audit-Id: a7a3df8f-11e9-4890-88c0-bd4fb1da521d
I0307 10:28:59.011266 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:59.011273 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:59.011280 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:59.011289 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:59.011295 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:59 GMT
I0307 10:28:59.011388 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1235","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4619 chars]
I0307 10:28:59.011635 7018 pod_ready.go:92] pod "kube-proxy-pxshj" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:59.011645 7018 pod_ready.go:81] duration metric: took 399.518428ms waiting for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
I0307 10:28:59.011652 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
I0307 10:28:59.208322 7018 request.go:622] Waited for 196.555002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
I0307 10:28:59.208407 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
I0307 10:28:59.208417 7018 round_trippers.go:469] Request Headers:
I0307 10:28:59.208432 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:59.208444 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:59.211802 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:59.211825 7018 round_trippers.go:577] Response Headers:
I0307 10:28:59.211836 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:59.211865 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:59 GMT
I0307 10:28:59.211875 7018 round_trippers.go:580] Audit-Id: 80279da3-3584-4856-89d4-205b357cfc2e
I0307 10:28:59.211901 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:59.211908 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:59.211916 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:59.212031 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8cm8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9f69548-a872-4d80-aa73-ffba99b33229","resourceVersion":"1005","creationTimestamp":"2023-03-07T18:26:06Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
I0307 10:28:59.407671 7018 request.go:622] Waited for 195.295612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
I0307 10:28:59.407782 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
I0307 10:28:59.407790 7018 round_trippers.go:469] Request Headers:
I0307 10:28:59.407799 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:59.407807 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:59.409534 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:59.409543 7018 round_trippers.go:577] Response Headers:
I0307 10:28:59.409549 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:59 GMT
I0307 10:28:59.409562 7018 round_trippers.go:580] Audit-Id: dced968d-8259-48a8-a369-67bdece8d0ff
I0307 10:28:59.409577 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:59.409586 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:59.409591 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:59.409597 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:59.409645 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m03","uid":"c193c270-6b50-44d5-962f-c88bf307bb54","resourceVersion":"1109","creationTimestamp":"2023-03-07T18:26:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4330 chars]
I0307 10:28:59.409824 7018 pod_ready.go:92] pod "kube-proxy-q8cm8" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:59.409830 7018 pod_ready.go:81] duration metric: took 398.16179ms waiting for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
I0307 10:28:59.409836 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:59.607367 7018 request.go:622] Waited for 197.479712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
I0307 10:28:59.607426 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
I0307 10:28:59.607435 7018 round_trippers.go:469] Request Headers:
I0307 10:28:59.607535 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:59.607549 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:59.610313 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:59.610332 7018 round_trippers.go:577] Response Headers:
I0307 10:28:59.610344 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:59 GMT
I0307 10:28:59.610351 7018 round_trippers.go:580] Audit-Id: 831ac5c9-6a6e-4238-9a57-e226e9d7fa9a
I0307 10:28:59.610359 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:59.610366 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:59.610373 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:59.610380 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:59.610482 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-260000","namespace":"kube-system","uid":"0739e1eb-4026-47ee-b2fe-6a9901c77317","resourceVersion":"1139","creationTimestamp":"2023-03-07T18:18:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.mirror":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.seen":"2023-03-07T18:18:28.739583516Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4889 chars]
I0307 10:28:59.807243 7018 request.go:622] Waited for 196.466836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:59.807382 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:59.807393 7018 round_trippers.go:469] Request Headers:
I0307 10:28:59.807405 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:59.807416 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:59.809503 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:59.809522 7018 round_trippers.go:577] Response Headers:
I0307 10:28:59.809534 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:59.809565 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:59 GMT
I0307 10:28:59.809578 7018 round_trippers.go:580] Audit-Id: 0db6ab63-4a4e-453d-ac64-1584164a0c7d
I0307 10:28:59.809586 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:59.809593 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:59.809600 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:59.809729 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:59.810013 7018 pod_ready.go:92] pod "kube-scheduler-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:59.810022 7018 pod_ready.go:81] duration metric: took 400.179443ms waiting for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:59.810030 7018 pod_ready.go:38] duration metric: took 1.60107891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 10:28:59.810045 7018 system_svc.go:44] waiting for kubelet service to be running ....
I0307 10:28:59.810114 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 10:28:59.818885 7018 system_svc.go:56] duration metric: took 8.836426ms WaitForService to wait for kubelet.
I0307 10:28:59.818896 7018 kubeadm.go:578] duration metric: took 6.228675231s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0307 10:28:59.818910 7018 node_conditions.go:102] verifying NodePressure condition ...
I0307 10:29:00.007159 7018 request.go:622] Waited for 188.194062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes
I0307 10:29:00.007207 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
I0307 10:29:00.007270 7018 round_trippers.go:469] Request Headers:
I0307 10:29:00.007282 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:29:00.007294 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:29:00.010101 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:29:00.010120 7018 round_trippers.go:577] Response Headers:
I0307 10:29:00.010131 7018 round_trippers.go:580] Audit-Id: 230c0ab3-666e-4727-a5a5-c4ebee390789
I0307 10:29:00.010139 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:29:00.010146 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:29:00.010153 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:29:00.010162 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:29:00.010174 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:29:00 GMT
I0307 10:29:00.010474 7018 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1235"},"items":[{"metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16317 chars]
I0307 10:29:00.011046 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:29:00.011058 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:29:00.011066 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:29:00.011071 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:29:00.011075 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:29:00.011082 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:29:00.011087 7018 node_conditions.go:105] duration metric: took 192.17207ms to run NodePressure ...
I0307 10:29:00.011096 7018 start.go:228] waiting for startup goroutines ...
I0307 10:29:00.011118 7018 start.go:242] writing updated cluster config ...
I0307 10:29:00.011876 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:29:00.012002 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:29:00.054733 7018 out.go:177] * Starting worker node multinode-260000-m03 in cluster multinode-260000
I0307 10:29:00.075685 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:29:00.075744 7018 cache.go:57] Caching tarball of preloaded images
I0307 10:29:00.075937 7018 preload.go:174] Found /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0307 10:29:00.075956 7018 cache.go:60] Finished verifying existence of preloaded tar for v1.26.2 on docker
I0307 10:29:00.076097 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:29:00.077109 7018 cache.go:193] Successfully downloaded all kic artifacts
I0307 10:29:00.077151 7018 start.go:364] acquiring machines lock for multinode-260000-m03: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 10:29:00.077243 7018 start.go:368] acquired machines lock for "multinode-260000-m03" in 73.572µs
I0307 10:29:00.077280 7018 start.go:96] Skipping create...Using existing machine configuration
I0307 10:29:00.077288 7018 fix.go:55] fixHost starting: m03
I0307 10:29:00.077721 7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:29:00.077794 7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0307 10:29:00.085146 7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51690
I0307 10:29:00.085469 7018 main.go:141] libmachine: () Calling .GetVersion
I0307 10:29:00.085788 7018 main.go:141] libmachine: Using API Version 1
I0307 10:29:00.085809 7018 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 10:29:00.086053 7018 main.go:141] libmachine: () Calling .GetMachineName
I0307 10:29:00.086177 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:00.086254 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetState
I0307 10:29:00.086348 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:29:00.086412 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid from json: 6959
I0307 10:29:00.087210 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid 6959 missing from process table
I0307 10:29:00.087228 7018 fix.go:103] recreateIfNeeded on multinode-260000-m03: state=Stopped err=<nil>
I0307 10:29:00.087236 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
W0307 10:29:00.087313 7018 fix.go:129] unexpected machine state, will restart: <nil>
I0307 10:29:00.108838 7018 out.go:177] * Restarting existing hyperkit VM for "multinode-260000-m03" ...
I0307 10:29:00.150753 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .Start
I0307 10:29:00.151097 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:29:00.151124 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/hyperkit.pid
I0307 10:29:00.151193 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Using UUID 79b2bd18-bd15-11ed-8f77-149d997fca88
I0307 10:29:00.180096 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Generated MAC 12:aa:e8:53:6e:6b
I0307 10:29:00.180120 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000
I0307 10:29:00.180266 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"79b2bd18-bd15-11ed-8f77-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002c11a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
I0307 10:29:00.180309 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"79b2bd18-bd15-11ed-8f77-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002c11a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
I0307 10:29:00.180345 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "79b2bd18-bd15-11ed-8f77-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/multinode-260000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage,/Users/j
enkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"}
I0307 10:29:00.180370 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 79b2bd18-bd15-11ed-8f77-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/multinode-260000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/mult
inode-260000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"
I0307 10:29:00.180383 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0307 10:29:00.181671 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Pid is 7128
I0307 10:29:00.182013 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Attempt 0
I0307 10:29:00.182028 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:29:00.182112 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid from json: 7128
I0307 10:29:00.183032 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Searching for 12:aa:e8:53:6e:6b in /var/db/dhcpd_leases ...
I0307 10:29:00.183093 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Found 14 entries in /var/db/dhcpd_leases!
I0307 10:29:00.183123 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d3d8}
I0307 10:29:00.183132 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d38e}
I0307 10:29:00.183144 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x64078204}
I0307 10:29:00.183153 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Found match: 12:aa:e8:53:6e:6b
I0307 10:29:00.183173 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | IP: 192.168.64.15
I0307 10:29:00.183209 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetConfigRaw
I0307 10:29:00.183787 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetIP
I0307 10:29:00.183966 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:29:00.184309 7018 machine.go:88] provisioning docker machine ...
I0307 10:29:00.184319 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:00.184441 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetMachineName
I0307 10:29:00.184532 7018 buildroot.go:166] provisioning hostname "multinode-260000-m03"
I0307 10:29:00.184543 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetMachineName
I0307 10:29:00.184630 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:00.184704 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:00.184784 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:00.184866 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:00.184944 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:00.185055 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:29:00.185361 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.15 22 <nil> <nil>}
I0307 10:29:00.185370 7018 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-260000-m03 && echo "multinode-260000-m03" | sudo tee /etc/hostname
I0307 10:29:00.188080 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0307 10:29:00.195643 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0307 10:29:00.196371 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0307 10:29:00.196384 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0307 10:29:00.196392 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0307 10:29:00.196404 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0307 10:29:00.552977 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0307 10:29:00.552995 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0307 10:29:00.657061 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0307 10:29:00.657081 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0307 10:29:00.657091 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0307 10:29:00.657102 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0307 10:29:00.657942 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0307 10:29:00.657953 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0307 10:29:05.166903 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0307 10:29:05.166935 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0307 10:29:05.166942 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0307 10:29:11.261985 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-260000-m03
I0307 10:29:11.262003 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:11.262135 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:11.262237 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.262323 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.262404 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:11.262539 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:29:11.262858 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.15 22 <nil> <nil>}
I0307 10:29:11.262870 7018 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-260000-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-260000-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-260000-m03' | sudo tee -a /etc/hosts;
fi
fi
I0307 10:29:11.336626 7018 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0307 10:29:11.336642 7018 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
I0307 10:29:11.336650 7018 buildroot.go:174] setting up certificates
I0307 10:29:11.336658 7018 provision.go:83] configureAuth start
I0307 10:29:11.336666 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetMachineName
I0307 10:29:11.336795 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetIP
I0307 10:29:11.336894 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:11.336973 7018 provision.go:138] copyHostCerts
I0307 10:29:11.337009 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
I0307 10:29:11.337059 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
I0307 10:29:11.337064 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
I0307 10:29:11.337174 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
I0307 10:29:11.337363 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
I0307 10:29:11.337395 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
I0307 10:29:11.337400 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
I0307 10:29:11.337460 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
I0307 10:29:11.337578 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
I0307 10:29:11.337610 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
I0307 10:29:11.337615 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
I0307 10:29:11.337670 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
I0307 10:29:11.337789 7018 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.multinode-260000-m03 san=[192.168.64.15 192.168.64.15 localhost 127.0.0.1 minikube multinode-260000-m03]
I0307 10:29:11.427111 7018 provision.go:172] copyRemoteCerts
I0307 10:29:11.427165 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0307 10:29:11.427179 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:11.427324 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:11.427419 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.427541 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:11.427623 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
I0307 10:29:11.465606 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0307 10:29:11.465676 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0307 10:29:11.481351 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem -> /etc/docker/server.pem
I0307 10:29:11.481417 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0307 10:29:11.496933 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0307 10:29:11.496996 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0307 10:29:11.512347 7018 provision.go:86] duration metric: configureAuth took 175.680754ms
I0307 10:29:11.512360 7018 buildroot.go:189] setting minikube options for container-runtime
I0307 10:29:11.512526 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:29:11.512539 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:11.512663 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:11.512758 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:11.512840 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.512918 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.512998 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:11.513100 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:29:11.513391 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.15 22 <nil> <nil>}
I0307 10:29:11.513399 7018 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0307 10:29:11.579311 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0307 10:29:11.579323 7018 buildroot.go:70] root file system type: tmpfs
I0307 10:29:11.579401 7018 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0307 10:29:11.579411 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:11.579540 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:11.579641 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.579740 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.579829 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:11.579956 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:29:11.580270 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.15 22 <nil> <nil>}
I0307 10:29:11.580316 7018 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.64.12"
Environment="NO_PROXY=192.168.64.12,192.168.64.13"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0307 10:29:11.652702 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.64.12
Environment=NO_PROXY=192.168.64.12,192.168.64.13
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0307 10:29:11.652720 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:11.652848 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:11.652922 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.653006 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.653098 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:11.653250 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:29:11.653560 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.15 22 <nil> <nil>}
I0307 10:29:11.653573 7018 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0307 10:29:12.175360 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0307 10:29:12.175374 7018 machine.go:91] provisioned docker machine in 11.991002684s
I0307 10:29:12.175381 7018 start.go:300] post-start starting for "multinode-260000-m03" (driver="hyperkit")
I0307 10:29:12.175386 7018 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0307 10:29:12.175396 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:12.175581 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0307 10:29:12.175596 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:12.175686 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:12.175759 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:12.175827 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:12.175912 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
I0307 10:29:12.214369 7018 ssh_runner.go:195] Run: cat /etc/os-release
I0307 10:29:12.216755 7018 command_runner.go:130] > NAME=Buildroot
I0307 10:29:12.216767 7018 command_runner.go:130] > VERSION=2021.02.12-1-gab7f370-dirty
I0307 10:29:12.216773 7018 command_runner.go:130] > ID=buildroot
I0307 10:29:12.216793 7018 command_runner.go:130] > VERSION_ID=2021.02.12
I0307 10:29:12.216800 7018 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0307 10:29:12.216963 7018 info.go:137] Remote host: Buildroot 2021.02.12
I0307 10:29:12.216972 7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
I0307 10:29:12.217057 7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
I0307 10:29:12.217200 7018 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
I0307 10:29:12.217206 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /etc/ssl/certs/39032.pem
I0307 10:29:12.217370 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0307 10:29:12.223606 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
I0307 10:29:12.239878 7018 start.go:303] post-start completed in 64.487773ms
I0307 10:29:12.239896 7018 fix.go:57] fixHost completed within 12.162546961s
I0307 10:29:12.239910 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:12.240038 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:12.240131 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:12.240212 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:12.240290 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:12.240409 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:29:12.240714 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.15 22 <nil> <nil>}
I0307 10:29:12.240722 7018 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0307 10:29:12.305514 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678213752.437212482
I0307 10:29:12.305525 7018 fix.go:207] guest clock: 1678213752.437212482
I0307 10:29:12.305531 7018 fix.go:220] Guest: 2023-03-07 10:29:12.437212482 -0800 PST Remote: 2023-03-07 10:29:12.239899 -0800 PST m=+114.574278242 (delta=197.313482ms)
I0307 10:29:12.305540 7018 fix.go:191] guest clock delta is within tolerance: 197.313482ms
I0307 10:29:12.305543 7018 start.go:83] releasing machines lock for "multinode-260000-m03", held for 12.228234634s
I0307 10:29:12.305562 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:12.305681 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetIP
I0307 10:29:12.327827 7018 out.go:177] * Found network options:
I0307 10:29:12.349261 7018 out.go:177] - NO_PROXY=192.168.64.12,192.168.64.13
W0307 10:29:12.371206 7018 proxy.go:119] fail to check proxy env: Error ip not in block
W0307 10:29:12.371232 7018 proxy.go:119] fail to check proxy env: Error ip not in block
I0307 10:29:12.371252 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:12.372006 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:12.372213 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:12.372340 7018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0307 10:29:12.372393 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
W0307 10:29:12.372424 7018 proxy.go:119] fail to check proxy env: Error ip not in block
W0307 10:29:12.372448 7018 proxy.go:119] fail to check proxy env: Error ip not in block
I0307 10:29:12.372546 7018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0307 10:29:12.372566 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:12.372582 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:12.372778 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:12.372789 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:12.372944 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:12.372988 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:12.373142 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
I0307 10:29:12.373168 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:12.373363 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
I0307 10:29:12.410014 7018 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0307 10:29:12.410159 7018 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0307 10:29:12.410222 7018 ssh_runner.go:195] Run: which cri-dockerd
I0307 10:29:12.452473 7018 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0307 10:29:12.452552 7018 command_runner.go:130] > /usr/bin/cri-dockerd
I0307 10:29:12.452679 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0307 10:29:12.459245 7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0307 10:29:12.470219 7018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0307 10:29:12.486201 7018 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0307 10:29:12.486242 7018 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0307 10:29:12.486250 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:29:12.486346 7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 10:29:12.502691 7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
I0307 10:29:12.502703 7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
I0307 10:29:12.502708 7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
I0307 10:29:12.502712 7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
I0307 10:29:12.502716 7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
I0307 10:29:12.502719 7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0307 10:29:12.502723 7018 command_runner.go:130] > registry.k8s.io/pause:3.9
I0307 10:29:12.502728 7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0307 10:29:12.502732 7018 command_runner.go:130] > registry.k8s.io/pause:3.6
I0307 10:29:12.502737 7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0307 10:29:12.503864 7018 docker.go:630] Got preloaded images: -- stdout --
kindest/kindnetd:v20230227-15197099
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0307 10:29:12.503874 7018 docker.go:560] Images already preloaded, skipping extraction
I0307 10:29:12.503880 7018 start.go:485] detecting cgroup driver to use...
I0307 10:29:12.503940 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 10:29:12.523327 7018 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0307 10:29:12.523340 7018 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
I0307 10:29:12.524671 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0307 10:29:12.536597 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0307 10:29:12.544140 7018 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0307 10:29:12.544193 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0307 10:29:12.550489 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 10:29:12.556842 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0307 10:29:12.563095 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 10:29:12.569445 7018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0307 10:29:12.575946 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0307 10:29:12.582556 7018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0307 10:29:12.588055 7018 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0307 10:29:12.588181 7018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0307 10:29:12.594025 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:29:12.673337 7018 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0307 10:29:12.685510 7018 start.go:485] detecting cgroup driver to use...
I0307 10:29:12.685584 7018 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0307 10:29:12.695059 7018 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0307 10:29:12.696323 7018 command_runner.go:130] > [Unit]
I0307 10:29:12.696352 7018 command_runner.go:130] > Description=Docker Application Container Engine
I0307 10:29:12.696362 7018 command_runner.go:130] > Documentation=https://docs.docker.com
I0307 10:29:12.696367 7018 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0307 10:29:12.696371 7018 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0307 10:29:12.696375 7018 command_runner.go:130] > StartLimitBurst=3
I0307 10:29:12.696382 7018 command_runner.go:130] > StartLimitIntervalSec=60
I0307 10:29:12.696388 7018 command_runner.go:130] > [Service]
I0307 10:29:12.696393 7018 command_runner.go:130] > Type=notify
I0307 10:29:12.696397 7018 command_runner.go:130] > Restart=on-failure
I0307 10:29:12.696402 7018 command_runner.go:130] > Environment=NO_PROXY=192.168.64.12
I0307 10:29:12.696406 7018 command_runner.go:130] > Environment=NO_PROXY=192.168.64.12,192.168.64.13
I0307 10:29:12.696413 7018 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0307 10:29:12.696422 7018 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0307 10:29:12.696428 7018 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0307 10:29:12.696433 7018 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0307 10:29:12.696439 7018 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0307 10:29:12.696445 7018 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0307 10:29:12.696454 7018 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0307 10:29:12.696462 7018 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0307 10:29:12.696468 7018 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0307 10:29:12.696471 7018 command_runner.go:130] > ExecStart=
I0307 10:29:12.696485 7018 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
I0307 10:29:12.696489 7018 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0307 10:29:12.696497 7018 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0307 10:29:12.696503 7018 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0307 10:29:12.696506 7018 command_runner.go:130] > LimitNOFILE=infinity
I0307 10:29:12.696510 7018 command_runner.go:130] > LimitNPROC=infinity
I0307 10:29:12.696514 7018 command_runner.go:130] > LimitCORE=infinity
I0307 10:29:12.696519 7018 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0307 10:29:12.696524 7018 command_runner.go:130] > # Only systemd 226 and above support this version.
I0307 10:29:12.696527 7018 command_runner.go:130] > TasksMax=infinity
I0307 10:29:12.696531 7018 command_runner.go:130] > TimeoutStartSec=0
I0307 10:29:12.696536 7018 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0307 10:29:12.696540 7018 command_runner.go:130] > Delegate=yes
I0307 10:29:12.696549 7018 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0307 10:29:12.696553 7018 command_runner.go:130] > KillMode=process
I0307 10:29:12.696557 7018 command_runner.go:130] > [Install]
I0307 10:29:12.696562 7018 command_runner.go:130] > WantedBy=multi-user.target
I0307 10:29:12.696635 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 10:29:12.705902 7018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0307 10:29:12.738895 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 10:29:12.747844 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 10:29:12.756435 7018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0307 10:29:12.775075 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 10:29:12.783647 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 10:29:12.795348 7018 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0307 10:29:12.795358 7018 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0307 10:29:12.795646 7018 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0307 10:29:12.877113 7018 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0307 10:29:12.966218 7018 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0307 10:29:12.966234 7018 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0307 10:29:12.977829 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:29:13.058533 7018 ssh_runner.go:195] Run: sudo systemctl restart docker
I0307 10:30:14.087064 7018 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
I0307 10:30:14.087078 7018 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
I0307 10:30:14.087168 7018 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.028339517s)
I0307 10:30:14.108918 7018 out.go:177]
W0307 10:30:14.130829 7018 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
W0307 10:30:14.130853 7018 out.go:239] *
W0307 10:30:14.131956 7018 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0307 10:30:14.211985 7018 out.go:177]
** /stderr **
multinode_test.go:295: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-260000" : exit status 90
multinode_test.go:298: (dbg) Run: out/minikube-darwin-amd64 node list -p multinode-260000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-260000 -n multinode-260000
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p multinode-260000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-260000 logs -n 25: (2.872673251s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs:
-- stdout --
*
* ==> Audit <==
* |---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-260000 ssh -n | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | multinode-260000-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-260000 cp multinode-260000-m02:/home/docker/cp-test.txt | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile946595065/001/cp-test_multinode-260000-m02.txt | | | | | |
| ssh | multinode-260000 ssh -n | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | multinode-260000-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-260000 cp multinode-260000-m02:/home/docker/cp-test.txt | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | multinode-260000:/home/docker/cp-test_multinode-260000-m02_multinode-260000.txt | | | | | |
| ssh | multinode-260000 ssh -n | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | multinode-260000-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-260000 ssh -n multinode-260000 sudo cat | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | /home/docker/cp-test_multinode-260000-m02_multinode-260000.txt | | | | | |
| cp | multinode-260000 cp multinode-260000-m02:/home/docker/cp-test.txt | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | multinode-260000-m03:/home/docker/cp-test_multinode-260000-m02_multinode-260000-m03.txt | | | | | |
| ssh | multinode-260000 ssh -n | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | multinode-260000-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-260000 ssh -n multinode-260000-m03 sudo cat | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | /home/docker/cp-test_multinode-260000-m02_multinode-260000-m03.txt | | | | | |
| cp | multinode-260000 cp testdata/cp-test.txt | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | multinode-260000-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-260000 ssh -n | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | multinode-260000-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-260000 cp multinode-260000-m03:/home/docker/cp-test.txt | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile946595065/001/cp-test_multinode-260000-m03.txt | | | | | |
| ssh | multinode-260000 ssh -n | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | multinode-260000-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-260000 cp multinode-260000-m03:/home/docker/cp-test.txt | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | multinode-260000:/home/docker/cp-test_multinode-260000-m03_multinode-260000.txt | | | | | |
| ssh | multinode-260000 ssh -n | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | multinode-260000-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-260000 ssh -n multinode-260000 sudo cat | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | /home/docker/cp-test_multinode-260000-m03_multinode-260000.txt | | | | | |
| cp | multinode-260000 cp multinode-260000-m03:/home/docker/cp-test.txt | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | multinode-260000-m02:/home/docker/cp-test_multinode-260000-m03_multinode-260000-m02.txt | | | | | |
| ssh | multinode-260000 ssh -n | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | multinode-260000-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-260000 ssh -n multinode-260000-m02 sudo cat | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | /home/docker/cp-test_multinode-260000-m03_multinode-260000-m02.txt | | | | | |
| node | multinode-260000 node stop m03 | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| node | multinode-260000 node start | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-260000 | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | |
| stop | -p multinode-260000 | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:27 PST |
| start | -p multinode-260000 | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:27 PST | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-260000 | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:30 PST | |
|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/03/07 10:27:17
Running on machine: MacOS-Agent-4
Binary: Built with gc go1.20.1 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0307 10:27:17.701567 7018 out.go:296] Setting OutFile to fd 1 ...
I0307 10:27:17.701766 7018 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 10:27:17.701771 7018 out.go:309] Setting ErrFile to fd 2...
I0307 10:27:17.701775 7018 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 10:27:17.701881 7018 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15985-3430/.minikube/bin
I0307 10:27:17.703156 7018 out.go:303] Setting JSON to false
I0307 10:27:17.723710 7018 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3412,"bootTime":1678210225,"procs":381,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0307 10:27:17.723849 7018 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0307 10:27:17.767920 7018 out.go:177] * [multinode-260000] minikube v1.29.0 on Darwin 13.2.1
I0307 10:27:17.789379 7018 notify.go:220] Checking for updates...
I0307 10:27:17.811044 7018 out.go:177] - MINIKUBE_LOCATION=15985
I0307 10:27:17.832029 7018 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:27:17.853161 7018 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0307 10:27:17.875122 7018 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0307 10:27:17.896016 7018 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
I0307 10:27:17.917197 7018 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0307 10:27:17.939813 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:27:17.939897 7018 driver.go:365] Setting default libvirt URI to qemu:///system
I0307 10:27:17.940536 7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:27:17.940612 7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0307 10:27:17.948145 7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51638
I0307 10:27:17.948508 7018 main.go:141] libmachine: () Calling .GetVersion
I0307 10:27:17.948945 7018 main.go:141] libmachine: Using API Version 1
I0307 10:27:17.948957 7018 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 10:27:17.949170 7018 main.go:141] libmachine: () Calling .GetMachineName
I0307 10:27:17.949257 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:17.976910 7018 out.go:177] * Using the hyperkit driver based on existing profile
I0307 10:27:18.019030 7018 start.go:296] selected driver: hyperkit
I0307 10:27:18.019085 7018 start.go:857] validating driver "hyperkit" against &{Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 10:27:18.019304 7018 start.go:868] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0307 10:27:18.019411 7018 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 10:27:18.019612 7018 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15985-3430/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0307 10:27:18.027551 7018 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.29.0
I0307 10:27:18.031921 7018 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:27:18.031941 7018 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0307 10:27:18.034844 7018 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0307 10:27:18.034876 7018 cni.go:84] Creating CNI manager for ""
I0307 10:27:18.034887 7018 cni.go:136] 3 nodes found, recommending kindnet
I0307 10:27:18.034896 7018 start_flags.go:319] config:
{Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 10:27:18.035029 7018 iso.go:125] acquiring lock: {Name:mk7e0ac9e85418e0580033b84b7097185a725e89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 10:27:18.076950 7018 out.go:177] * Starting control plane node multinode-260000 in cluster multinode-260000
I0307 10:27:18.098026 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:27:18.098116 7018 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
I0307 10:27:18.098148 7018 cache.go:57] Caching tarball of preloaded images
I0307 10:27:18.098313 7018 preload.go:174] Found /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0307 10:27:18.098333 7018 cache.go:60] Finished verifying existence of preloaded tar for v1.26.2 on docker
I0307 10:27:18.098530 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:27:18.099358 7018 cache.go:193] Successfully downloaded all kic artifacts
I0307 10:27:18.099407 7018 start.go:364] acquiring machines lock for multinode-260000: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 10:27:18.099512 7018 start.go:368] acquired machines lock for "multinode-260000" in 86.293µs
I0307 10:27:18.099554 7018 start.go:96] Skipping create...Using existing machine configuration
I0307 10:27:18.099566 7018 fix.go:55] fixHost starting:
I0307 10:27:18.100062 7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:27:18.100091 7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0307 10:27:18.107480 7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51640
I0307 10:27:18.107803 7018 main.go:141] libmachine: () Calling .GetVersion
I0307 10:27:18.108127 7018 main.go:141] libmachine: Using API Version 1
I0307 10:27:18.108137 7018 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 10:27:18.108326 7018 main.go:141] libmachine: () Calling .GetMachineName
I0307 10:27:18.108443 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:18.108543 7018 main.go:141] libmachine: (multinode-260000) Calling .GetState
I0307 10:27:18.108624 7018 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:27:18.108709 7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid from json: 6235
I0307 10:27:18.109465 7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid 6235 missing from process table
I0307 10:27:18.109498 7018 fix.go:103] recreateIfNeeded on multinode-260000: state=Stopped err=<nil>
I0307 10:27:18.109518 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
W0307 10:27:18.109599 7018 fix.go:129] unexpected machine state, will restart: <nil>
I0307 10:27:18.130859 7018 out.go:177] * Restarting existing hyperkit VM for "multinode-260000" ...
I0307 10:27:18.151952 7018 main.go:141] libmachine: (multinode-260000) Calling .Start
I0307 10:27:18.152162 7018 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:27:18.152193 7018 main.go:141] libmachine: (multinode-260000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid
I0307 10:27:18.153359 7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid 6235 missing from process table
I0307 10:27:18.153369 7018 main.go:141] libmachine: (multinode-260000) DBG | pid 6235 is in state "Stopped"
I0307 10:27:18.153384 7018 main.go:141] libmachine: (multinode-260000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid...
I0307 10:27:18.153520 7018 main.go:141] libmachine: (multinode-260000) DBG | Using UUID 6086a850-bd14-11ed-9c3c-149d997fca88
I0307 10:27:18.261699 7018 main.go:141] libmachine: (multinode-260000) DBG | Generated MAC f2:4e:cd:75:18:a7
I0307 10:27:18.261738 7018 main.go:141] libmachine: (multinode-260000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000
I0307 10:27:18.261843 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6086a850-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ecbd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0307 10:27:18.261893 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6086a850-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ecbd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0307 10:27:18.261955 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6086a850-bd14-11ed-9c3c-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/multinode-260000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"}
I0307 10:27:18.262040 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6086a850-bd14-11ed-9c3c-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/multinode-260000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/console-ring -f kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"
I0307 10:27:18.262064 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0307 10:27:18.263449 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Pid is 7033
I0307 10:27:18.263845 7018 main.go:141] libmachine: (multinode-260000) DBG | Attempt 0
I0307 10:27:18.263868 7018 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:27:18.263948 7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid from json: 7033
I0307 10:27:18.265382 7018 main.go:141] libmachine: (multinode-260000) DBG | Searching for f2:4e:cd:75:18:a7 in /var/db/dhcpd_leases ...
I0307 10:27:18.265430 7018 main.go:141] libmachine: (multinode-260000) DBG | Found 14 entries in /var/db/dhcpd_leases!
I0307 10:27:18.265476 7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x64078204}
I0307 10:27:18.265490 7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ca:14:a2:6d:d0:c ID:1,ca:14:a2:6d:d0:c Lease:0x6407819f}
I0307 10:27:18.265519 7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d194}
I0307 10:27:18.265530 7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d15a}
I0307 10:27:18.265540 7018 main.go:141] libmachine: (multinode-260000) DBG | Found match: f2:4e:cd:75:18:a7
I0307 10:27:18.265548 7018 main.go:141] libmachine: (multinode-260000) DBG | IP: 192.168.64.12
I0307 10:27:18.265590 7018 main.go:141] libmachine: (multinode-260000) Calling .GetConfigRaw
I0307 10:27:18.266196 7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
I0307 10:27:18.266384 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:27:18.266657 7018 machine.go:88] provisioning docker machine ...
I0307 10:27:18.266667 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:18.266773 7018 main.go:141] libmachine: (multinode-260000) Calling .GetMachineName
I0307 10:27:18.266878 7018 buildroot.go:166] provisioning hostname "multinode-260000"
I0307 10:27:18.266892 7018 main.go:141] libmachine: (multinode-260000) Calling .GetMachineName
I0307 10:27:18.266989 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:18.267073 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:18.267172 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:18.267250 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:18.267341 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:18.267461 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:27:18.267830 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.12 22 <nil> <nil>}
I0307 10:27:18.267839 7018 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-260000 && echo "multinode-260000" | sudo tee /etc/hostname
I0307 10:27:18.269902 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0307 10:27:18.319277 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0307 10:27:18.319873 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0307 10:27:18.319886 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0307 10:27:18.319904 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0307 10:27:18.319918 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0307 10:27:18.674514 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0307 10:27:18.674532 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0307 10:27:18.778516 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0307 10:27:18.778535 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0307 10:27:18.778566 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0307 10:27:18.778585 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0307 10:27:18.779423 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0307 10:27:18.779434 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0307 10:27:23.282731 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0307 10:27:23.282756 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0307 10:27:23.282762 7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0307 10:27:53.345501 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-260000
I0307 10:27:53.345516 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:53.345641 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:53.345737 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.345814 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.345897 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:53.346017 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:27:53.346336 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.12 22 <nil> <nil>}
I0307 10:27:53.346349 7018 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-260000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-260000/g' /etc/hosts;
else
echo '127.0.1.1 multinode-260000' | sudo tee -a /etc/hosts;
fi
fi
I0307 10:27:53.408248 7018 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0307 10:27:53.408267 7018 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
I0307 10:27:53.408279 7018 buildroot.go:174] setting up certificates
I0307 10:27:53.408288 7018 provision.go:83] configureAuth start
I0307 10:27:53.408298 7018 main.go:141] libmachine: (multinode-260000) Calling .GetMachineName
I0307 10:27:53.408431 7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
I0307 10:27:53.408534 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:53.408622 7018 provision.go:138] copyHostCerts
I0307 10:27:53.408658 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
I0307 10:27:53.408716 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
I0307 10:27:53.408724 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
I0307 10:27:53.408836 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
I0307 10:27:53.409016 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
I0307 10:27:53.409051 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
I0307 10:27:53.409056 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
I0307 10:27:53.409119 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
I0307 10:27:53.409268 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
I0307 10:27:53.409298 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
I0307 10:27:53.409303 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
I0307 10:27:53.409364 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
I0307 10:27:53.409496 7018 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.multinode-260000 san=[192.168.64.12 192.168.64.12 localhost 127.0.0.1 minikube multinode-260000]
I0307 10:27:53.471318 7018 provision.go:172] copyRemoteCerts
I0307 10:27:53.471371 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0307 10:27:53.471386 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:53.471501 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:53.471590 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.471685 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:53.471784 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
I0307 10:27:53.506343 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0307 10:27:53.506415 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0307 10:27:53.522448 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem -> /etc/docker/server.pem
I0307 10:27:53.522505 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0307 10:27:53.538178 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0307 10:27:53.538241 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0307 10:27:53.554443 7018 provision.go:86] duration metric: configureAuth took 146.138879ms
I0307 10:27:53.554456 7018 buildroot.go:189] setting minikube options for container-runtime
I0307 10:27:53.554627 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:27:53.554640 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:53.554773 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:53.554871 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:53.554956 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.555028 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.555105 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:53.555212 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:27:53.555523 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.12 22 <nil> <nil>}
I0307 10:27:53.555532 7018 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0307 10:27:53.611701 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0307 10:27:53.611715 7018 buildroot.go:70] root file system type: tmpfs
I0307 10:27:53.611791 7018 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0307 10:27:53.611806 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:53.611930 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:53.612020 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.612103 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.612184 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:53.612317 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:27:53.612630 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.12 22 <nil> <nil>}
I0307 10:27:53.612673 7018 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0307 10:27:53.678288 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0307 10:27:53.678311 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:53.678443 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:53.678532 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.678617 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:53.678712 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:53.678844 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:27:53.679161 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.12 22 <nil> <nil>}
I0307 10:27:53.679175 7018 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0307 10:27:54.321619 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
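The `diff … || { mv …; systemctl restart docker; }` one-liner above is an install-if-changed pattern: `diff` exits non-zero when the staged unit differs from the installed one (or the target doesn't exist yet, as the "can't stat" message shows), and only then is the new file moved into place and the service restarted. A minimal standalone sketch of that pattern — function name and the `changed`/`unchanged` markers are hypothetical, not minikube's:

```shell
# Install a staged file over a target only when the contents differ.
# Mirrors the log's logic: diff succeeding means "identical, do nothing";
# diff failing (different, or target missing) means "install".
install_if_changed() {
  target="$1"; staged="$2"
  if diff -u "$target" "$staged" >/dev/null 2>&1; then
    rm -f "$staged"          # identical: discard the staged copy
    echo unchanged
  else
    mv "$staged" "$target"   # differs or target missing: install it
    echo changed
  fi
}
```

In the real command the "install" branch additionally runs `systemctl daemon-reload`, `enable`, and `restart`, which is why an unchanged unit file costs no Docker restart on a warm re-provision.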
I0307 10:27:54.321632 7018 machine.go:91] provisioned docker machine in 36.054802092s
I0307 10:27:54.321643 7018 start.go:300] post-start starting for "multinode-260000" (driver="hyperkit")
I0307 10:27:54.321648 7018 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0307 10:27:54.321659 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:54.321839 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0307 10:27:54.321852 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:54.321961 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:54.322042 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:54.322149 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:54.322246 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
I0307 10:27:54.357925 7018 ssh_runner.go:195] Run: cat /etc/os-release
I0307 10:27:54.360302 7018 command_runner.go:130] > NAME=Buildroot
I0307 10:27:54.360311 7018 command_runner.go:130] > VERSION=2021.02.12-1-gab7f370-dirty
I0307 10:27:54.360321 7018 command_runner.go:130] > ID=buildroot
I0307 10:27:54.360325 7018 command_runner.go:130] > VERSION_ID=2021.02.12
I0307 10:27:54.360330 7018 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0307 10:27:54.360498 7018 info.go:137] Remote host: Buildroot 2021.02.12
I0307 10:27:54.360509 7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
I0307 10:27:54.360589 7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
I0307 10:27:54.360737 7018 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
I0307 10:27:54.360743 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /etc/ssl/certs/39032.pem
I0307 10:27:54.360917 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0307 10:27:54.366509 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
I0307 10:27:54.382252 7018 start.go:303] post-start completed in 60.601074ms
I0307 10:27:54.382265 7018 fix.go:57] fixHost completed within 36.282535453s
I0307 10:27:54.382281 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:54.382411 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:54.382494 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:54.382592 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:54.382687 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:54.382812 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:27:54.383114 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.12 22 <nil> <nil>}
I0307 10:27:54.383122 7018 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0307 10:27:54.438352 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678213674.566046378
I0307 10:27:54.438363 7018 fix.go:207] guest clock: 1678213674.566046378
I0307 10:27:54.438368 7018 fix.go:220] Guest: 2023-03-07 10:27:54.566046378 -0800 PST Remote: 2023-03-07 10:27:54.382269 -0800 PST m=+36.717005002 (delta=183.777378ms)
I0307 10:27:54.438390 7018 fix.go:191] guest clock delta is within tolerance: 183.777378ms
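The guest-clock check above reads the VM's epoch time over SSH (`date +%s.%N`, mangled in the log by a printf-verb artifact), subtracts the host's time, and accepts the result if the absolute delta is within a tolerance (183ms here). A rough sketch of that comparison on whole seconds — the function name and the tolerance value are illustrative, not minikube's actual threshold:

```shell
# Return success when |guest - host| is within the given tolerance (seconds).
clock_delta_ok() {
  guest="$1"; host="$2"; tol="$3"
  delta=$((guest - host))
  delta=${delta#-}            # absolute value: strip a leading minus sign
  [ "$delta" -le "$tol" ]
}
```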
I0307 10:27:54.438395 7018 start.go:83] releasing machines lock for "multinode-260000", held for 36.33870613s
I0307 10:27:54.438412 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:54.438533 7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
I0307 10:27:54.438635 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:54.438919 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:54.439021 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:27:54.439107 7018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0307 10:27:54.439131 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:54.439139 7018 ssh_runner.go:195] Run: cat /version.json
I0307 10:27:54.439150 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:27:54.439230 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:54.439270 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:27:54.439355 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:54.439367 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:27:54.439464 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:54.439484 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:27:54.439556 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
I0307 10:27:54.439569 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
I0307 10:27:54.469202 7018 command_runner.go:130] > {"iso_version": "v1.29.0-1677261626-15923", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "d5f8b7c14d0e3cd88db476786b15ed1c8f7b9a62"}
I0307 10:27:54.469345 7018 ssh_runner.go:195] Run: systemctl --version
I0307 10:27:54.473110 7018 command_runner.go:130] > systemd 247 (247)
I0307 10:27:54.473123 7018 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I0307 10:27:54.510321 7018 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0307 10:27:54.511264 7018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0307 10:27:54.515706 7018 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0307 10:27:54.515766 7018 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0307 10:27:54.515808 7018 ssh_runner.go:195] Run: which cri-dockerd
I0307 10:27:54.518180 7018 command_runner.go:130] > /usr/bin/cri-dockerd
I0307 10:27:54.518271 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0307 10:27:54.524837 7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0307 10:27:54.535806 7018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0307 10:27:54.546514 7018 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0307 10:27:54.546672 7018 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0307 10:27:54.546690 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:27:54.546786 7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 10:27:54.561856 7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
I0307 10:27:54.561870 7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
I0307 10:27:54.561875 7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
I0307 10:27:54.561879 7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
I0307 10:27:54.561885 7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
I0307 10:27:54.561889 7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0307 10:27:54.561893 7018 command_runner.go:130] > registry.k8s.io/pause:3.9
I0307 10:27:54.561898 7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0307 10:27:54.561902 7018 command_runner.go:130] > registry.k8s.io/pause:3.6
I0307 10:27:54.561906 7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0307 10:27:54.561912 7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0307 10:27:54.562858 7018 docker.go:630] Got preloaded images: -- stdout --
kindest/kindnetd:v20230227-15197099
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0307 10:27:54.562875 7018 docker.go:560] Images already preloaded, skipping extraction
I0307 10:27:54.562881 7018 start.go:485] detecting cgroup driver to use...
I0307 10:27:54.562957 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 10:27:54.574839 7018 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0307 10:27:54.574851 7018 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
I0307 10:27:54.575174 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0307 10:27:54.582305 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0307 10:27:54.589279 7018 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0307 10:27:54.589317 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0307 10:27:54.596289 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 10:27:54.603219 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0307 10:27:54.610180 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 10:27:54.617267 7018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0307 10:27:54.624610 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0307 10:27:54.631553 7018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0307 10:27:54.637786 7018 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0307 10:27:54.637952 7018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0307 10:27:54.644168 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:27:54.724435 7018 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0307 10:27:54.736384 7018 start.go:485] detecting cgroup driver to use...
I0307 10:27:54.736451 7018 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0307 10:27:54.745963 7018 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0307 10:27:54.745979 7018 command_runner.go:130] > [Unit]
I0307 10:27:54.745984 7018 command_runner.go:130] > Description=Docker Application Container Engine
I0307 10:27:54.745988 7018 command_runner.go:130] > Documentation=https://docs.docker.com
I0307 10:27:54.745993 7018 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0307 10:27:54.745999 7018 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0307 10:27:54.746004 7018 command_runner.go:130] > StartLimitBurst=3
I0307 10:27:54.746007 7018 command_runner.go:130] > StartLimitIntervalSec=60
I0307 10:27:54.746011 7018 command_runner.go:130] > [Service]
I0307 10:27:54.746014 7018 command_runner.go:130] > Type=notify
I0307 10:27:54.746017 7018 command_runner.go:130] > Restart=on-failure
I0307 10:27:54.746024 7018 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0307 10:27:54.746040 7018 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0307 10:27:54.746047 7018 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0307 10:27:54.746053 7018 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0307 10:27:54.746068 7018 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0307 10:27:54.746075 7018 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0307 10:27:54.746081 7018 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0307 10:27:54.746090 7018 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0307 10:27:54.746099 7018 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0307 10:27:54.746104 7018 command_runner.go:130] > ExecStart=
I0307 10:27:54.746114 7018 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
I0307 10:27:54.746119 7018 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0307 10:27:54.746130 7018 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0307 10:27:54.746136 7018 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0307 10:27:54.746140 7018 command_runner.go:130] > LimitNOFILE=infinity
I0307 10:27:54.746143 7018 command_runner.go:130] > LimitNPROC=infinity
I0307 10:27:54.746147 7018 command_runner.go:130] > LimitCORE=infinity
I0307 10:27:54.746156 7018 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0307 10:27:54.746161 7018 command_runner.go:130] > # Only systemd 226 and above support this version.
I0307 10:27:54.746165 7018 command_runner.go:130] > TasksMax=infinity
I0307 10:27:54.746168 7018 command_runner.go:130] > TimeoutStartSec=0
I0307 10:27:54.746173 7018 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0307 10:27:54.746179 7018 command_runner.go:130] > Delegate=yes
I0307 10:27:54.746184 7018 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0307 10:27:54.746188 7018 command_runner.go:130] > KillMode=process
I0307 10:27:54.746191 7018 command_runner.go:130] > [Install]
I0307 10:27:54.746201 7018 command_runner.go:130] > WantedBy=multi-user.target
I0307 10:27:54.746263 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 10:27:54.754873 7018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0307 10:27:54.766931 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 10:27:54.775320 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 10:27:54.784274 7018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0307 10:27:54.810077 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 10:27:54.819002 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 10:27:54.830417 7018 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0307 10:27:54.830427 7018 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0307 10:27:54.830775 7018 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0307 10:27:54.910530 7018 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0307 10:27:54.991106 7018 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0307 10:27:54.991125 7018 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0307 10:27:55.002612 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:27:55.082706 7018 ssh_runner.go:195] Run: sudo systemctl restart docker
I0307 10:27:56.344251 7018 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.261521172s)
I0307 10:27:56.344319 7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 10:27:56.427984 7018 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0307 10:27:56.518324 7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 10:27:56.611821 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:27:56.699165 7018 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0307 10:27:56.710403 7018 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0307 10:27:56.710477 7018 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0307 10:27:56.714055 7018 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0307 10:27:56.714067 7018 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0307 10:27:56.714072 7018 command_runner.go:130] > Device: 16h/22d Inode: 853 Links: 1
I0307 10:27:56.714079 7018 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0307 10:27:56.714098 7018 command_runner.go:130] > Access: 2023-03-07 18:27:56.836416904 +0000
I0307 10:27:56.714105 7018 command_runner.go:130] > Modify: 2023-03-07 18:27:56.836416904 +0000
I0307 10:27:56.714109 7018 command_runner.go:130] > Change: 2023-03-07 18:27:56.838416903 +0000
I0307 10:27:56.714113 7018 command_runner.go:130] > Birth: -
I0307 10:27:56.714136 7018 start.go:553] Will wait 60s for crictl version
I0307 10:27:56.714180 7018 ssh_runner.go:195] Run: which crictl
I0307 10:27:56.716256 7018 command_runner.go:130] > /usr/bin/crictl
I0307 10:27:56.716479 7018 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0307 10:27:56.782605 7018 command_runner.go:130] > Version: 0.1.0
I0307 10:27:56.782630 7018 command_runner.go:130] > RuntimeName: docker
I0307 10:27:56.782659 7018 command_runner.go:130] > RuntimeVersion: 20.10.23
I0307 10:27:56.782788 7018 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0307 10:27:56.786182 7018 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0307 10:27:56.786249 7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 10:27:56.806368 7018 command_runner.go:130] > 20.10.23
I0307 10:27:56.807205 7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 10:27:56.827016 7018 command_runner.go:130] > 20.10.23
I0307 10:27:56.870119 7018 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
I0307 10:27:56.870166 7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
I0307 10:27:56.870574 7018 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0307 10:27:56.874782 7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
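The `/etc/hosts` edit above is a filter-then-append rewrite: strip any existing line for the hostname, append the fresh `ip<TAB>name` entry, and copy the result back atomically via a temp file. A self-contained sketch operating on an ordinary file (function name hypothetical; the real command runs the copy under `sudo`):

```shell
# Replace (or add) a single hosts-file entry without duplicating it.
update_hosts_entry() {
  hosts="$1"; ip="$2"; name="$3"
  tmp=$(mktemp)
  # Drop any line ending in the hostname; ignore grep's exit status when
  # every line is filtered out.
  grep -v "[[:space:]]$name\$" "$hosts" > "$tmp" || true
  printf '%s\t%s\n' "$ip" "$name" >> "$tmp"
  mv "$tmp" "$hosts"
}
```

Running it twice with different IPs leaves exactly one entry for the name, which is the property the restart path relies on.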
I0307 10:27:56.882699 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:27:56.882759 7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 10:27:56.898148 7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
I0307 10:27:56.898160 7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
I0307 10:27:56.898164 7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
I0307 10:27:56.898169 7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
I0307 10:27:56.898172 7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
I0307 10:27:56.898176 7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0307 10:27:56.898180 7018 command_runner.go:130] > registry.k8s.io/pause:3.9
I0307 10:27:56.898184 7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0307 10:27:56.898188 7018 command_runner.go:130] > registry.k8s.io/pause:3.6
I0307 10:27:56.898197 7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0307 10:27:56.898202 7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0307 10:27:56.898858 7018 docker.go:630] Got preloaded images: -- stdout --
kindest/kindnetd:v20230227-15197099
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
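The skip decision logged next ("Images already preloaded, skipping extraction") reduces to a set comparison: every image the preload tarball should provide must already appear in the `docker images` listing. A sketch of that check under stated assumptions — the two hard-coded lists stand in for the expected preload manifest and the real `docker images --format '{{.Repository}}:{{.Tag}}'` output:

```shell
# Set-difference check behind "Images already preloaded": anything in the
# expected list but missing from the runtime's list would force extraction.
exp=$(mktemp); got=$(mktemp)
printf '%s\n' 'registry.k8s.io/kube-apiserver:v1.26.2' 'registry.k8s.io/etcd:3.5.6-0' 'registry.k8s.io/pause:3.9' | sort > "$exp"
printf '%s\n' 'registry.k8s.io/etcd:3.5.6-0' 'registry.k8s.io/pause:3.9' 'registry.k8s.io/kube-apiserver:v1.26.2' | sort > "$got"
# comm -23 prints lines unique to the first (sorted) file: the missing images.
missing=$(comm -23 "$exp" "$got")
[ -z "$missing" ] && echo "Images already preloaded, skipping extraction"
```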
I0307 10:27:56.898867 7018 docker.go:560] Images already preloaded, skipping extraction
I0307 10:27:56.898945 7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 10:27:56.913839 7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
I0307 10:27:56.913851 7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
I0307 10:27:56.913855 7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
I0307 10:27:56.913869 7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
I0307 10:27:56.913873 7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
I0307 10:27:56.913877 7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0307 10:27:56.913881 7018 command_runner.go:130] > registry.k8s.io/pause:3.9
I0307 10:27:56.913885 7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0307 10:27:56.913889 7018 command_runner.go:130] > registry.k8s.io/pause:3.6
I0307 10:27:56.913893 7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0307 10:27:56.913900 7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0307 10:27:56.914547 7018 docker.go:630] Got preloaded images: -- stdout --
kindest/kindnetd:v20230227-15197099
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0307 10:27:56.914562 7018 cache_images.go:84] Images are preloaded, skipping loading
I0307 10:27:56.914636 7018 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0307 10:27:56.935563 7018 command_runner.go:130] > cgroupfs
I0307 10:27:56.936272 7018 cni.go:84] Creating CNI manager for ""
I0307 10:27:56.936282 7018 cni.go:136] 3 nodes found, recommending kindnet
I0307 10:27:56.936296 7018 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0307 10:27:56.936310 7018 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.12 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-260000 NodeName:multinode-260000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0307 10:27:56.936405 7018 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.64.12
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "multinode-260000"
  kubeletExtraArgs:
    node-ip: 192.168.64.12
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.64.12"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
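One invariant worth checking in a generated kubeadm config like the one above: the KubeProxyConfiguration `clusterCIDR` must agree with the ClusterConfiguration `podSubnet`, otherwise kube-proxy can misroute or masquerade pod-to-pod traffic. A sketch of that check against a two-line excerpt (point `CFG` at the real `/var/tmp/minikube/kubeadm.yaml` on the node to run it for real):

```shell
# Cross-check podSubnet (ClusterConfiguration) vs clusterCIDR (kube-proxy).
CFG=$(mktemp)
printf 'podSubnet: "10.244.0.0/16"\nclusterCIDR: "10.244.0.0/16"\n' > "$CFG"
sub=$(awk -F'"' '/podSubnet/ {print $2}' "$CFG")     # value between the quotes
cidr=$(awk -F'"' '/clusterCIDR/ {print $2}' "$CFG")
[ "$sub" = "$cidr" ] && echo "podSubnet and clusterCIDR agree: $sub"
```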
I0307 10:27:56.936460 7018 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-260000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.12
[Install]
config:
{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0307 10:27:56.936536 7018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
I0307 10:27:56.943109 7018 command_runner.go:130] > kubeadm
I0307 10:27:56.943116 7018 command_runner.go:130] > kubectl
I0307 10:27:56.943120 7018 command_runner.go:130] > kubelet
I0307 10:27:56.943263 7018 binaries.go:44] Found k8s binaries, skipping transfer
I0307 10:27:56.943308 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0307 10:27:56.949592 7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (449 bytes)
I0307 10:27:56.960366 7018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0307 10:27:56.970938 7018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2095 bytes)
I0307 10:27:56.982338 7018 ssh_runner.go:195] Run: grep 192.168.64.12 control-plane.minikube.internal$ /etc/hosts
I0307 10:27:56.984586 7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.12 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0307 10:27:56.991939 7018 certs.go:56] Setting up /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000 for IP: 192.168.64.12
I0307 10:27:56.991953 7018 certs.go:186] acquiring lock for shared ca certs: {Name:mk21aa92235e3b083ba3cf4a52527e5734aca22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 10:27:56.992091 7018 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key
I0307 10:27:56.992154 7018 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key
I0307 10:27:56.992245 7018 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key
I0307 10:27:56.992309 7018 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.key.546ed142
I0307 10:27:56.992376 7018 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.key
I0307 10:27:56.992385 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0307 10:27:56.992414 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0307 10:27:56.992439 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0307 10:27:56.992461 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0307 10:27:56.992479 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0307 10:27:56.992497 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0307 10:27:56.992518 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0307 10:27:56.992536 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0307 10:27:56.992623 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem (1338 bytes)
W0307 10:27:56.992661 7018 certs.go:397] ignoring /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903_empty.pem, impossibly tiny 0 bytes
I0307 10:27:56.992672 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem (1675 bytes)
I0307 10:27:56.992706 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem (1082 bytes)
I0307 10:27:56.992736 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem (1123 bytes)
I0307 10:27:56.992769 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem (1675 bytes)
I0307 10:27:56.992838 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem (1708 bytes)
I0307 10:27:56.992873 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0307 10:27:56.992892 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem -> /usr/share/ca-certificates/3903.pem
I0307 10:27:56.992913 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /usr/share/ca-certificates/39032.pem
I0307 10:27:56.993367 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0307 10:27:57.008967 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0307 10:27:57.024057 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0307 10:27:57.039253 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0307 10:27:57.054424 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0307 10:27:57.069714 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0307 10:27:57.085285 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0307 10:27:57.100487 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0307 10:27:57.116166 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0307 10:27:57.131487 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem --> /usr/share/ca-certificates/3903.pem (1338 bytes)
I0307 10:27:57.146782 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /usr/share/ca-certificates/39032.pem (1708 bytes)
I0307 10:27:57.161670 7018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0307 10:27:57.172684 7018 ssh_runner.go:195] Run: openssl version
I0307 10:27:57.175822 7018 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0307 10:27:57.176031 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/39032.pem && ln -fs /usr/share/ca-certificates/39032.pem /etc/ssl/certs/39032.pem"
I0307 10:27:57.182397 7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/39032.pem
I0307 10:27:57.185195 7018 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 7 18:06 /usr/share/ca-certificates/39032.pem
I0307 10:27:57.185263 7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 7 18:06 /usr/share/ca-certificates/39032.pem
I0307 10:27:57.185306 7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/39032.pem
I0307 10:27:57.188613 7018 command_runner.go:130] > 3ec20f2e
I0307 10:27:57.188881 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/39032.pem /etc/ssl/certs/3ec20f2e.0"
I0307 10:27:57.195955 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0307 10:27:57.203206 7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0307 10:27:57.205892 7018 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 7 18:02 /usr/share/ca-certificates/minikubeCA.pem
I0307 10:27:57.206086 7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 7 18:02 /usr/share/ca-certificates/minikubeCA.pem
I0307 10:27:57.206121 7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0307 10:27:57.209355 7018 command_runner.go:130] > b5213941
I0307 10:27:57.209587 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0307 10:27:57.216626 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3903.pem && ln -fs /usr/share/ca-certificates/3903.pem /etc/ssl/certs/3903.pem"
I0307 10:27:57.223521 7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3903.pem
I0307 10:27:57.226194 7018 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 7 18:06 /usr/share/ca-certificates/3903.pem
I0307 10:27:57.226381 7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 7 18:06 /usr/share/ca-certificates/3903.pem
I0307 10:27:57.226417 7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3903.pem
I0307 10:27:57.229589 7018 command_runner.go:130] > 51391683
I0307 10:27:57.229807 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3903.pem /etc/ssl/certs/51391683.0"
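The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's `-CApath` lookup convention: a CA certificate is found in the directory by a symlink named `<subject-hash>.0`. A self-contained sketch with a throwaway CA in a scratch directory (assumes `openssl` is installed; all names here are illustrative, not from the log):

```shell
# OpenSSL locates CAs in a -CApath directory via <subject-hash>.0 symlinks,
# which is what minikube sets up under /etc/ssl/certs above.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")   # e.g. 3ec20f2e
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"
# With the hash symlink in place, CApath-based verification succeeds.
openssl verify -CApath "$DIR" "$DIR/ca.pem"
```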
I0307 10:27:57.236882 7018 kubeadm.go:401] StartCluster: {Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 10:27:57.236992 7018 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0307 10:27:57.252692 7018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0307 10:27:57.259210 7018 command_runner.go:130] > /var/lib/kubelet/config.yaml
I0307 10:27:57.259222 7018 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
I0307 10:27:57.259230 7018 command_runner.go:130] > /var/lib/minikube/etcd:
I0307 10:27:57.259234 7018 command_runner.go:130] > member
I0307 10:27:57.259381 7018 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0307 10:27:57.259400 7018 kubeadm.go:633] restartCluster start
I0307 10:27:57.259443 7018 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0307 10:27:57.266382 7018 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0307 10:27:57.266677 7018 kubeconfig.go:135] verify returned: extract IP: "multinode-260000" does not appear in /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:27:57.266753 7018 kubeconfig.go:146] "multinode-260000" context is missing from /Users/jenkins/minikube-integration/15985-3430/kubeconfig - will repair!
I0307 10:27:57.266945 7018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15985-3430/kubeconfig: {Name:mkea569ea3041d84fd3aeaa788f308c9891aa7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 10:27:57.267393 7018 loader.go:373] Config loaded from file: /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:27:57.267600 7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 10:27:57.268098 7018 cert_rotation.go:137] Starting client certificate rotation controller
I0307 10:27:57.268266 7018 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0307 10:27:57.274410 7018 api_server.go:165] Checking apiserver status ...
I0307 10:27:57.274450 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:27:57.282537 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:27:57.783579 7018 api_server.go:165] Checking apiserver status ...
I0307 10:27:57.783768 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:27:57.794313 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:27:58.283596 7018 api_server.go:165] Checking apiserver status ...
I0307 10:27:58.283730 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:27:58.294644 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:27:58.782684 7018 api_server.go:165] Checking apiserver status ...
I0307 10:27:58.782873 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:27:58.793430 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:27:59.283543 7018 api_server.go:165] Checking apiserver status ...
I0307 10:27:59.283649 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:27:59.294225 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:27:59.782887 7018 api_server.go:165] Checking apiserver status ...
I0307 10:27:59.783019 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:27:59.793607 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:00.282689 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:00.282922 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:00.292782 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:00.784107 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:00.784212 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:00.794376 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:01.283293 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:01.283433 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:01.293684 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:01.783681 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:01.783913 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:01.794869 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:02.283942 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:02.284074 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:02.294517 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:02.782945 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:02.783113 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:02.794006 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:03.284588 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:03.284777 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:03.294981 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:03.783910 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:03.784171 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:03.795492 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:04.283913 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:04.284104 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:04.294550 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:04.784723 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:04.784921 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:04.795506 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:05.284742 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:05.284884 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:05.294924 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:05.784725 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:05.784834 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:05.795470 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:06.284719 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:06.284873 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:06.295722 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:06.784533 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:06.784754 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:06.795131 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:07.284699 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:07.287011 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:07.296334 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:07.296343 7018 api_server.go:165] Checking apiserver status ...
I0307 10:28:07.296382 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 10:28:07.304816 7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 10:28:07.304829 7018 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
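The retry loop above runs `pgrep` roughly every half second until a deadline, then gives up and declares the apiserver down ("needs reconfigure"). A minimal, dependency-injected sketch of that poll-until-deadline pattern (the function name and structure are illustrative, not minikube's actual API):

```python
import time

def poll_until(check, timeout=30.0, interval=0.5,
               clock=time.monotonic, sleep=time.sleep):
    """Call check() until it returns a non-None value or the deadline
    passes. Mirrors the pgrep retry loop in the log: each attempt either
    yields a PID string or fails, and the caller sleeps and retries."""
    deadline = clock() + timeout
    while True:
        value = check()
        if value is not None:
            return value
        if clock() >= deadline:
            return None
        sleep(interval)

# Illustrative check, matching the command in the log: pgrep exits 0 and
# prints the PID when a match exists, non-zero otherwise.
# import subprocess
# def apiserver_pid():
#     r = subprocess.run(
#         ["sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"],
#         capture_output=True, text=True)
#     return r.stdout.strip() if r.returncode == 0 else None
```

`clock` and `sleep` are injectable so the loop can be exercised without real waiting.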
I0307 10:28:07.304833 7018 kubeadm.go:1120] stopping kube-system containers ...
I0307 10:28:07.304891 7018 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0307 10:28:07.321379 7018 command_runner.go:130] > da06b08e5617
I0307 10:28:07.321390 7018 command_runner.go:130] > c4559ff3518d
I0307 10:28:07.321394 7018 command_runner.go:130] > 5b66601ca9d1
I0307 10:28:07.321398 7018 command_runner.go:130] > 0ace7c6cf637
I0307 10:28:07.321401 7018 command_runner.go:130] > 37e6cf092e1c
I0307 10:28:07.321411 7018 command_runner.go:130] > ae9d394ad7a7
I0307 10:28:07.321416 7018 command_runner.go:130] > 808d83da8d84
I0307 10:28:07.321423 7018 command_runner.go:130] > 1bf0ab9eb4c5
I0307 10:28:07.321426 7018 command_runner.go:130] > 2243964fbc4d
I0307 10:28:07.321432 7018 command_runner.go:130] > 3b27eb7db4c2
I0307 10:28:07.321436 7018 command_runner.go:130] > 10d167b9d987
I0307 10:28:07.321440 7018 command_runner.go:130] > 6ac51e9516a2
I0307 10:28:07.321443 7018 command_runner.go:130] > 3e9b5dec9e21
I0307 10:28:07.321448 7018 command_runner.go:130] > 0721a87b433b
I0307 10:28:07.321452 7018 command_runner.go:130] > aef4edf5b492
I0307 10:28:07.321456 7018 command_runner.go:130] > cfcf920b7378
I0307 10:28:07.322130 7018 docker.go:456] Stopping containers: [da06b08e5617 c4559ff3518d 5b66601ca9d1 0ace7c6cf637 37e6cf092e1c ae9d394ad7a7 808d83da8d84 1bf0ab9eb4c5 2243964fbc4d 3b27eb7db4c2 10d167b9d987 6ac51e9516a2 3e9b5dec9e21 0721a87b433b aef4edf5b492 cfcf920b7378]
I0307 10:28:07.322197 7018 ssh_runner.go:195] Run: docker stop da06b08e5617 c4559ff3518d 5b66601ca9d1 0ace7c6cf637 37e6cf092e1c ae9d394ad7a7 808d83da8d84 1bf0ab9eb4c5 2243964fbc4d 3b27eb7db4c2 10d167b9d987 6ac51e9516a2 3e9b5dec9e21 0721a87b433b aef4edf5b492 cfcf920b7378
I0307 10:28:07.338863 7018 command_runner.go:130] > da06b08e5617
I0307 10:28:07.338874 7018 command_runner.go:130] > c4559ff3518d
I0307 10:28:07.339268 7018 command_runner.go:130] > 5b66601ca9d1
I0307 10:28:07.339476 7018 command_runner.go:130] > 0ace7c6cf637
I0307 10:28:07.339531 7018 command_runner.go:130] > 37e6cf092e1c
I0307 10:28:07.339608 7018 command_runner.go:130] > ae9d394ad7a7
I0307 10:28:07.339615 7018 command_runner.go:130] > 808d83da8d84
I0307 10:28:07.339735 7018 command_runner.go:130] > 1bf0ab9eb4c5
I0307 10:28:07.339806 7018 command_runner.go:130] > 2243964fbc4d
I0307 10:28:07.339952 7018 command_runner.go:130] > 3b27eb7db4c2
I0307 10:28:07.340042 7018 command_runner.go:130] > 10d167b9d987
I0307 10:28:07.340172 7018 command_runner.go:130] > 6ac51e9516a2
I0307 10:28:07.340231 7018 command_runner.go:130] > 3e9b5dec9e21
I0307 10:28:07.340237 7018 command_runner.go:130] > 0721a87b433b
I0307 10:28:07.340416 7018 command_runner.go:130] > aef4edf5b492
I0307 10:28:07.340541 7018 command_runner.go:130] > cfcf920b7378
I0307 10:28:07.341444 7018 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0307 10:28:07.352567 7018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0307 10:28:07.358762 7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0307 10:28:07.358772 7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0307 10:28:07.358778 7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0307 10:28:07.358784 7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0307 10:28:07.358923 7018 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0307 10:28:07.358971 7018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0307 10:28:07.365297 7018 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0307 10:28:07.365309 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0307 10:28:07.435009 7018 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0307 10:28:07.435021 7018 command_runner.go:130] > [certs] Using existing ca certificate authority
I0307 10:28:07.435026 7018 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I0307 10:28:07.435249 7018 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0307 10:28:07.435474 7018 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
I0307 10:28:07.435692 7018 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
I0307 10:28:07.436004 7018 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
I0307 10:28:07.436233 7018 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
I0307 10:28:07.436509 7018 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
I0307 10:28:07.436724 7018 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0307 10:28:07.436961 7018 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
I0307 10:28:07.437121 7018 command_runner.go:130] > [certs] Using the existing "sa" key
I0307 10:28:07.438004 7018 command_runner.go:130] ! W0307 18:28:07.567847 1206 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:07.438020 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0307 10:28:07.477158 7018 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0307 10:28:07.530979 7018 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0307 10:28:07.671495 7018 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0307 10:28:07.806243 7018 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0307 10:28:08.012059 7018 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0307 10:28:08.013940 7018 command_runner.go:130] ! W0307 18:28:07.610432 1212 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:08.013962 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0307 10:28:08.064445 7018 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0307 10:28:08.064458 7018 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0307 10:28:08.064462 7018 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0307 10:28:08.158176 7018 command_runner.go:130] ! W0307 18:28:08.188188 1218 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:08.158212 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0307 10:28:08.205939 7018 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0307 10:28:08.205952 7018 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0307 10:28:08.207362 7018 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0307 10:28:08.208239 7018 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0307 10:28:08.211123 7018 command_runner.go:130] ! W0307 18:28:08.337529 1240 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:08.211182 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0307 10:28:08.268874 7018 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0307 10:28:08.276469 7018 command_runner.go:130] ! W0307 18:28:08.400815 1250 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:08.276569 7018 api_server.go:51] waiting for apiserver process to appear ...
I0307 10:28:08.276628 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 10:28:08.791796 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 10:28:09.291418 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 10:28:09.790079 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 10:28:10.289945 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 10:28:10.300303 7018 command_runner.go:130] > 1604
I0307 10:28:10.300322 7018 api_server.go:71] duration metric: took 2.023748028s to wait for apiserver process to appear ...
I0307 10:28:10.300332 7018 api_server.go:87] waiting for apiserver healthz status ...
I0307 10:28:10.300340 7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
I0307 10:28:13.002874 7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0307 10:28:13.002891 7018 api_server.go:102] status: https://192.168.64.12:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0307 10:28:13.505043 7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
I0307 10:28:13.511549 7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0307 10:28:13.511564 7018 api_server.go:102] status: https://192.168.64.12:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
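The verbose 500 bodies above use kube-apiserver's healthz format: one line per check, `[+]` for passing and `[-]` for failing. Here the `rbac/bootstrap-roles` and `scheduling/bootstrap-system-priority-classes` poststarthooks are still running, so the aggregate check fails even though etcd and the informers are already up. A small parser for that format (a hypothetical helper, not part of minikube):

```python
def parse_healthz(body):
    """Split a verbose /healthz body into (passing, failing) check names.
    Lines look like '[+]ping ok' or
    '[-]poststarthook/rbac/bootstrap-roles failed: reason withheld'."""
    passed, failed = [], []
    for line in body.splitlines():
        if line.startswith("[+]"):
            passed.append(line[3:].split()[0])
        elif line.startswith("[-]"):
            failed.append(line[3:].split()[0])
    return passed, failed
```

Once every check passes, the endpoint returns a plain `ok` with status 200, which is what the loop below eventually sees.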
I0307 10:28:14.003030 7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
I0307 10:28:14.007459 7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0307 10:28:14.007479 7018 api_server.go:102] status: https://192.168.64.12:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0307 10:28:14.504449 7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
I0307 10:28:14.508376 7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 200:
ok
I0307 10:28:14.508433 7018 round_trippers.go:463] GET https://192.168.64.12:8443/version
I0307 10:28:14.508438 7018 round_trippers.go:469] Request Headers:
I0307 10:28:14.508446 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:14.508452 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:14.516136 7018 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0307 10:28:14.516148 7018 round_trippers.go:577] Response Headers:
I0307 10:28:14.516154 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:14.516158 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:14.516163 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:14.516168 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:14.516173 7018 round_trippers.go:580] Content-Length: 263
I0307 10:28:14.516178 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:14 GMT
I0307 10:28:14.516185 7018 round_trippers.go:580] Audit-Id: 364007ce-aca2-49dd-9978-704f40503cf3
I0307 10:28:14.516202 7018 request.go:1171] Response Body: {
"major": "1",
"minor": "26",
"gitVersion": "v1.26.2",
"gitCommit": "fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b",
"gitTreeState": "clean",
"buildDate": "2023-02-22T13:32:22Z",
"goVersion": "go1.19.6",
"compiler": "gc",
"platform": "linux/amd64"
}
I0307 10:28:14.516246 7018 api_server.go:140] control plane version: v1.26.2
I0307 10:28:14.516254 7018 api_server.go:130] duration metric: took 4.215899257s to wait for apiserver health ...
I0307 10:28:14.516265 7018 cni.go:84] Creating CNI manager for ""
I0307 10:28:14.516271 7018 cni.go:136] 3 nodes found, recommending kindnet
I0307 10:28:14.538513 7018 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0307 10:28:14.558703 7018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0307 10:28:14.565010 7018 command_runner.go:130] > File: /opt/cni/bin/portmap
I0307 10:28:14.565023 7018 command_runner.go:130] > Size: 2798344 Blocks: 5472 IO Block: 4096 regular file
I0307 10:28:14.565030 7018 command_runner.go:130] > Device: 11h/17d Inode: 3542 Links: 1
I0307 10:28:14.565035 7018 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0307 10:28:14.565040 7018 command_runner.go:130] > Access: 2023-03-07 18:27:25.800133630 +0000
I0307 10:28:14.565044 7018 command_runner.go:130] > Modify: 2023-02-24 23:58:49.000000000 +0000
I0307 10:28:14.565049 7018 command_runner.go:130] > Change: 2023-03-07 18:27:24.520133706 +0000
I0307 10:28:14.565052 7018 command_runner.go:130] > Birth: -
I0307 10:28:14.565080 7018 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
I0307 10:28:14.565086 7018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0307 10:28:14.614484 7018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0307 10:28:15.463255 7018 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0307 10:28:15.465520 7018 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0307 10:28:15.467209 7018 command_runner.go:130] > serviceaccount/kindnet unchanged
I0307 10:28:15.486465 7018 command_runner.go:130] > daemonset.apps/kindnet configured
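The CNI step copies the rendered manifest into the VM and applies it with the bundled kubectl against an explicit in-VM kubeconfig; `unchanged`/`configured` in the output means the kindnet objects already existed from the previous run. A sketch of that apply call (paths copied from the log; the function name and injectable runner are assumptions):

```python
import subprocess

def apply_cni_manifest(run=subprocess.run):
    """Apply the staged CNI manifest the way the log does: the in-VM
    kubectl binary, an explicit kubeconfig, and the copied manifest."""
    return run(["sudo", "/var/lib/minikube/binaries/v1.26.2/kubectl",
                "apply",
                "--kubeconfig=/var/lib/minikube/kubeconfig",
                "-f", "/var/tmp/minikube/cni.yaml"], check=True)
```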
I0307 10:28:15.487964 7018 system_pods.go:43] waiting for kube-system pods to appear ...
I0307 10:28:15.488018 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:15.488023 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.488030 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.488035 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.490928 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:15.490936 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.490945 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.490952 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.490959 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.490966 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.490971 7018 round_trippers.go:580] Audit-Id: fbf2e35b-55b7-466f-9275-31e56ce04183
I0307 10:28:15.490978 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.492557 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1032"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"402","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81648 chars]
I0307 10:28:15.495381 7018 system_pods.go:59] 12 kube-system pods found
I0307 10:28:15.495395 7018 system_pods.go:61] "coredns-787d4945fb-x8m8v" [c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6] Running
I0307 10:28:15.495400 7018 system_pods.go:61] "etcd-multinode-260000" [aa53b0f1-968e-450d-90b2-ad26a79cea99] Running
I0307 10:28:15.495403 7018 system_pods.go:61] "kindnet-gfgwn" [64dc8044-f77e-41b4-bb19-1a254bf29e05] Running
I0307 10:28:15.495407 7018 system_pods.go:61] "kindnet-j5gj9" [f17b9702-c5c0-4b31-a136-e0370bc62d79] Running
I0307 10:28:15.495411 7018 system_pods.go:61] "kindnet-z6kqp" [4884d21b-1be9-4b53-8f70-dd4fe0efa264] Running
I0307 10:28:15.495415 7018 system_pods.go:61] "kube-apiserver-multinode-260000" [64ba25bc-eee2-433a-b0ef-a13769f04555] Running
I0307 10:28:15.495421 7018 system_pods.go:61] "kube-controller-manager-multinode-260000" [8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0307 10:28:15.495425 7018 system_pods.go:61] "kube-proxy-8qwhq" [3e455149-bbe2-4173-a413-f4962626b233] Running
I0307 10:28:15.495429 7018 system_pods.go:61] "kube-proxy-pxshj" [3ee33e87-083d-4833-a6d4-8b459ec6ea70] Running
I0307 10:28:15.495433 7018 system_pods.go:61] "kube-proxy-q8cm8" [b9f69548-a872-4d80-aa73-ffba99b33229] Running
I0307 10:28:15.495437 7018 system_pods.go:61] "kube-scheduler-multinode-260000" [0739e1eb-4026-47ee-b2fe-6a9901c77317] Running
I0307 10:28:15.495441 7018 system_pods.go:61] "storage-provisioner" [0b88c317-8e90-4927-b4f8-cae5597b5dc8] Running
I0307 10:28:15.495444 7018 system_pods.go:74] duration metric: took 7.473493ms to wait for pod list to return data ...
I0307 10:28:15.495451 7018 node_conditions.go:102] verifying NodePressure condition ...
I0307 10:28:15.495484 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
I0307 10:28:15.495488 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.495494 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.495499 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.497193 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.497203 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.497209 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.497215 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.497225 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.497237 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.497246 7018 round_trippers.go:580] Audit-Id: 87494186-1238-43d5-866d-3fb8cf3ac670
I0307 10:28:15.497252 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.497439 7018 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1032"},"items":[{"metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16457 chars]
I0307 10:28:15.497964 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:28:15.497980 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:28:15.497991 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:28:15.497994 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:28:15.497998 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:28:15.498001 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:28:15.498005 7018 node_conditions.go:105] duration metric: took 2.549988ms to run NodePressure ...
I0307 10:28:15.498014 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0307 10:28:15.613921 7018 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I0307 10:28:15.647095 7018 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I0307 10:28:15.648104 7018 command_runner.go:130] ! W0307 18:28:15.688091 2114 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:15.648194 7018 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0307 10:28:15.648246 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
I0307 10:28:15.648251 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.648257 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.648262 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.650635 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:15.650643 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.650648 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.650653 7018 round_trippers.go:580] Audit-Id: cb509b59-97eb-4381-8070-69cc8abdab39
I0307 10:28:15.650664 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.650670 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.650675 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.650683 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.651119 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1034"},"items":[{"metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"288","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28366 chars]
I0307 10:28:15.651785 7018 kubeadm.go:784] kubelet initialised
I0307 10:28:15.651796 7018 kubeadm.go:785] duration metric: took 3.59091ms waiting for restarted kubelet to initialise ...
I0307 10:28:15.651802 7018 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 10:28:15.651829 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:15.651834 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.651840 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.651856 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.654797 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:15.654807 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.654812 7018 round_trippers.go:580] Audit-Id: a9d90e98-0ed7-4ce3-b64a-cc82a3347b6f
I0307 10:28:15.654817 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.654823 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.654828 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.654832 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.654837 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.656020 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1034"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"402","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81648 chars]
I0307 10:28:15.657761 7018 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
I0307 10:28:15.657793 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:15.657798 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.657805 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.657811 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.659065 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.659077 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.659085 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.659092 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.659098 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.659104 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.659109 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.659115 7018 round_trippers.go:580] Audit-Id: eb2db07a-7079-4adb-a12f-c3919e2af0f0
I0307 10:28:15.659276 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"402","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6281 chars]
I0307 10:28:15.659508 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:15.659514 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.659520 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.659526 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.660689 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.660696 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.660701 7018 round_trippers.go:580] Audit-Id: 4dd3efdc-1609-4f2d-9ae0-4a842093d527
I0307 10:28:15.660706 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.660711 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.660717 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.660724 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.660734 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.660828 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:15.660996 7018 pod_ready.go:97] node "multinode-260000" hosting pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.661003 7018 pod_ready.go:81] duration metric: took 3.233228ms waiting for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
E0307 10:28:15.661009 7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.661014 7018 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:15.661036 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-260000
I0307 10:28:15.661040 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.661046 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.661051 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.662218 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.662226 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.662232 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.662238 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.662244 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.662249 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.662254 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.662258 7018 round_trippers.go:580] Audit-Id: eeb6ea95-4efc-44d3-86d7-f3e9abc4f441
I0307 10:28:15.662373 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"288","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5846 chars]
I0307 10:28:15.662566 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:15.662572 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.662578 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.662586 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.663695 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.663702 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.663708 7018 round_trippers.go:580] Audit-Id: 0c08723d-f6d6-4c3f-bc19-ce14073bddc8
I0307 10:28:15.663713 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.663718 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.663724 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.663728 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.663733 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.663841 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:15.664005 7018 pod_ready.go:97] node "multinode-260000" hosting pod "etcd-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.664012 7018 pod_ready.go:81] duration metric: took 2.993408ms waiting for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
E0307 10:28:15.664024 7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "etcd-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.664031 7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:15.664054 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-260000
I0307 10:28:15.664059 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.664064 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.664070 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.665133 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.665140 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.665145 7018 round_trippers.go:580] Audit-Id: d8155bb7-ed68-40c6-a807-4b433cb29ded
I0307 10:28:15.665164 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.665181 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.665188 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.665193 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.665199 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.665314 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-260000","namespace":"kube-system","uid":"64ba25bc-eee2-433a-b0ef-a13769f04555","resourceVersion":"269","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"76402f877907c95a3936143f580968be","kubernetes.io/config.mirror":"76402f877907c95a3936143f580968be","kubernetes.io/config.seen":"2023-03-07T18:18:28.739580253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7383 chars]
I0307 10:28:15.665528 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:15.665534 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.665540 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.665546 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.666728 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.666735 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.666743 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.666752 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.666761 7018 round_trippers.go:580] Audit-Id: 90f98c95-77ef-4f41-8b0d-68655aa67aef
I0307 10:28:15.666768 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.666773 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.666778 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.666842 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:15.667008 7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-apiserver-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.667016 7018 pod_ready.go:81] duration metric: took 2.97888ms waiting for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
E0307 10:28:15.667021 7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-apiserver-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.667025 7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:15.688093 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-260000
I0307 10:28:15.688109 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.688116 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.688121 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.689605 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:15.689619 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.689626 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.689631 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.689636 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.689642 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:15 GMT
I0307 10:28:15.689649 7018 round_trippers.go:580] Audit-Id: 30247593-c3f9-4f0b-8ec3-84987c2d98e7
I0307 10:28:15.689656 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.689775 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-260000","namespace":"kube-system","uid":"8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c","resourceVersion":"1031","creationTimestamp":"2023-03-07T18:18:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.mirror":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.seen":"2023-03-07T18:18:16.838236256Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7421 chars]
I0307 10:28:15.888328 7018 request.go:622] Waited for 198.258292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:15.888357 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:15.888362 7018 round_trippers.go:469] Request Headers:
I0307 10:28:15.888370 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:15.888378 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:15.890719 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:15.890732 7018 round_trippers.go:577] Response Headers:
I0307 10:28:15.890738 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:15.890742 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:15.890748 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:15.890753 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:16 GMT
I0307 10:28:15.890757 7018 round_trippers.go:580] Audit-Id: 2c7858e8-abf5-4b14-91d6-55537d022b63
I0307 10:28:15.890762 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:15.890832 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:15.891019 7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-controller-manager-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.891027 7018 pod_ready.go:81] duration metric: took 223.996649ms waiting for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
E0307 10:28:15.891033 7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-controller-manager-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:15.891041 7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
I0307 10:28:16.088078 7018 request.go:622] Waited for 197.006181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
I0307 10:28:16.088110 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
I0307 10:28:16.088145 7018 round_trippers.go:469] Request Headers:
I0307 10:28:16.088152 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:16.088171 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:16.090139 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:16.090148 7018 round_trippers.go:577] Response Headers:
I0307 10:28:16.090153 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:16 GMT
I0307 10:28:16.090158 7018 round_trippers.go:580] Audit-Id: 33bdce0d-afd5-41b3-be54-1778f67df277
I0307 10:28:16.090163 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:16.090168 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:16.090174 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:16.090180 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:16.090265 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8qwhq","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e455149-bbe2-4173-a413-f4962626b233","resourceVersion":"359","creationTimestamp":"2023-03-07T18:18:41Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
I0307 10:28:16.289549 7018 request.go:622] Waited for 199.030503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:16.289608 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:16.289613 7018 round_trippers.go:469] Request Headers:
I0307 10:28:16.289619 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:16.289625 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:16.291464 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:16.291474 7018 round_trippers.go:577] Response Headers:
I0307 10:28:16.291480 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:16.291486 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:16.291491 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:16.291497 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:16 GMT
I0307 10:28:16.291502 7018 round_trippers.go:580] Audit-Id: 304d1604-8237-4817-97b8-2398828df2aa
I0307 10:28:16.291512 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:16.291606 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:16.291814 7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-proxy-8qwhq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:16.291823 7018 pod_ready.go:81] duration metric: took 400.77463ms waiting for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
E0307 10:28:16.291829 7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-proxy-8qwhq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:16.291845 7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
I0307 10:28:16.488974 7018 request.go:622] Waited for 197.089772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
I0307 10:28:16.489010 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
I0307 10:28:16.489014 7018 round_trippers.go:469] Request Headers:
I0307 10:28:16.489021 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:16.489028 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:16.490668 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:16.490678 7018 round_trippers.go:577] Response Headers:
I0307 10:28:16.490684 7018 round_trippers.go:580] Audit-Id: f7cf2cf1-fe75-45fb-b387-3c47e4ca38bf
I0307 10:28:16.490689 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:16.490695 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:16.490699 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:16.490705 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:16.490710 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:16 GMT
I0307 10:28:16.490783 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pxshj","generateName":"kube-proxy-","namespace":"kube-system","uid":"3ee33e87-083d-4833-a6d4-8b459ec6ea70","resourceVersion":"469","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
I0307 10:28:16.688164 7018 request.go:622] Waited for 197.086665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:16.688201 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:16.688207 7018 round_trippers.go:469] Request Headers:
I0307 10:28:16.688216 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:16.688224 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:16.690320 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:16.690331 7018 round_trippers.go:577] Response Headers:
I0307 10:28:16.690337 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:16.690347 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:16.690354 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:16.690360 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:16 GMT
I0307 10:28:16.690365 7018 round_trippers.go:580] Audit-Id: fafa8c79-056c-4482-a7d3-9af678647000
I0307 10:28:16.690370 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:16.690435 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"75f8e0c4-47f4-43dc-ac5e-5f77d8d4ab3b","resourceVersion":"812","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4512 chars]
I0307 10:28:16.690610 7018 pod_ready.go:92] pod "kube-proxy-pxshj" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:16.690616 7018 pod_ready.go:81] duration metric: took 398.761593ms waiting for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
I0307 10:28:16.690622 7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
I0307 10:28:16.888997 7018 request.go:622] Waited for 198.34143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
I0307 10:28:16.889083 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
I0307 10:28:16.889091 7018 round_trippers.go:469] Request Headers:
I0307 10:28:16.889099 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:16.889107 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:16.890960 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:16.890976 7018 round_trippers.go:577] Response Headers:
I0307 10:28:16.890988 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:16.890997 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:16.891006 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:17 GMT
I0307 10:28:16.891013 7018 round_trippers.go:580] Audit-Id: 2a6b83fb-355a-47d1-a5fb-041011c34ce5
I0307 10:28:16.891021 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:16.891029 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:16.891126 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8cm8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9f69548-a872-4d80-aa73-ffba99b33229","resourceVersion":"1005","creationTimestamp":"2023-03-07T18:26:06Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
I0307 10:28:17.089042 7018 request.go:622] Waited for 197.667165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
I0307 10:28:17.089099 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
I0307 10:28:17.089104 7018 round_trippers.go:469] Request Headers:
I0307 10:28:17.089110 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:17.089123 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:17.092228 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:17.092240 7018 round_trippers.go:577] Response Headers:
I0307 10:28:17.092249 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:17 GMT
I0307 10:28:17.092256 7018 round_trippers.go:580] Audit-Id: 4d8ae72e-fdde-4d59-9a71-91d0c3ee68a0
I0307 10:28:17.092264 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:17.092271 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:17.092276 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:17.092282 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:17.092354 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m03","uid":"c193c270-6b50-44d5-962f-c88bf307bb54","resourceVersion":"1019","creationTimestamp":"2023-03-07T18:26:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4450 chars]
I0307 10:28:17.092536 7018 pod_ready.go:92] pod "kube-proxy-q8cm8" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:17.092542 7018 pod_ready.go:81] duration metric: took 401.914192ms waiting for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
I0307 10:28:17.092550 7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:17.289090 7018 request.go:622] Waited for 196.506508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
I0307 10:28:17.289121 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
I0307 10:28:17.289126 7018 round_trippers.go:469] Request Headers:
I0307 10:28:17.289133 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:17.289140 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:17.290898 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:17.290909 7018 round_trippers.go:577] Response Headers:
I0307 10:28:17.290915 7018 round_trippers.go:580] Audit-Id: 9fb63a2b-6315-4a56-8919-8e3ff05df64c
I0307 10:28:17.290920 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:17.290926 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:17.290932 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:17.290936 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:17.290941 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:17 GMT
I0307 10:28:17.291122 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-260000","namespace":"kube-system","uid":"0739e1eb-4026-47ee-b2fe-6a9901c77317","resourceVersion":"1035","creationTimestamp":"2023-03-07T18:18:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.mirror":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.seen":"2023-03-07T18:18:28.739583516Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5133 chars]
I0307 10:28:17.488710 7018 request.go:622] Waited for 197.357013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:17.488741 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:17.488773 7018 round_trippers.go:469] Request Headers:
I0307 10:28:17.488780 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:17.488786 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:17.492401 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:17.492411 7018 round_trippers.go:577] Response Headers:
I0307 10:28:17.492417 7018 round_trippers.go:580] Audit-Id: 8a48812e-9efb-405d-92a7-d9eab408cfe7
I0307 10:28:17.492429 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:17.492435 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:17.492439 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:17.492445 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:17.492449 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:17 GMT
I0307 10:28:17.492517 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:17.492711 7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-scheduler-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:17.492718 7018 pod_ready.go:81] duration metric: took 400.162814ms waiting for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
E0307 10:28:17.492724 7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-scheduler-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
I0307 10:28:17.492729 7018 pod_ready.go:38] duration metric: took 1.8409126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 10:28:17.492740 7018 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0307 10:28:17.500400 7018 command_runner.go:130] > -16
I0307 10:28:17.500574 7018 ops.go:34] apiserver oom_adj: -16
I0307 10:28:17.500584 7018 kubeadm.go:637] restartCluster took 20.241085671s
I0307 10:28:17.500589 7018 kubeadm.go:403] StartCluster complete in 20.26361982s
I0307 10:28:17.500600 7018 settings.go:142] acquiring lock: {Name:mk4d055ee1d778ec2752c0ce26b6fb536462adb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 10:28:17.500678 7018 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:28:17.501023 7018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15985-3430/kubeconfig: {Name:mkea569ea3041d84fd3aeaa788f308c9891aa7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 10:28:17.501262 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0307 10:28:17.501294 7018 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0307 10:28:17.546290 7018 out.go:177] * Enabled addons:
I0307 10:28:17.501457 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:28:17.501669 7018 loader.go:373] Config loaded from file: /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:28:17.583590 7018 addons.go:499] enable addons completed in 82.276784ms: enabled=[]
I0307 10:28:17.583795 7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 10:28:17.584004 7018 round_trippers.go:463] GET https://192.168.64.12:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0307 10:28:17.584011 7018 round_trippers.go:469] Request Headers:
I0307 10:28:17.584017 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:17.584022 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:17.585901 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:17.585911 7018 round_trippers.go:577] Response Headers:
I0307 10:28:17.585917 7018 round_trippers.go:580] Audit-Id: 381c106f-61b9-4164-8d45-b690984d5352
I0307 10:28:17.585927 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:17.585933 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:17.585937 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:17.585942 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:17.585947 7018 round_trippers.go:580] Content-Length: 292
I0307 10:28:17.585952 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:17 GMT
I0307 10:28:17.585965 7018 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9058bb7-5525-4245-a92a-3b0f0144c5d4","resourceVersion":"1033","creationTimestamp":"2023-03-07T18:18:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0307 10:28:17.586053 7018 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-260000" context rescaled to 1 replicas
I0307 10:28:17.586069 7018 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0307 10:28:17.598551 7018 command_runner.go:130] > apiVersion: v1
I0307 10:28:17.607409 7018 command_runner.go:130] > data:
I0307 10:28:17.607416 7018 command_runner.go:130] > Corefile: |
I0307 10:28:17.607423 7018 command_runner.go:130] > .:53 {
I0307 10:28:17.607394 7018 out.go:177] * Verifying Kubernetes components...
I0307 10:28:17.607432 7018 command_runner.go:130] > log
I0307 10:28:17.665368 7018 command_runner.go:130] > errors
I0307 10:28:17.665380 7018 command_runner.go:130] > health {
I0307 10:28:17.665387 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 10:28:17.665390 7018 command_runner.go:130] > lameduck 5s
I0307 10:28:17.665471 7018 command_runner.go:130] > }
I0307 10:28:17.665485 7018 command_runner.go:130] > ready
I0307 10:28:17.665501 7018 command_runner.go:130] > kubernetes cluster.local in-addr.arpa ip6.arpa {
I0307 10:28:17.665515 7018 command_runner.go:130] > pods insecure
I0307 10:28:17.665530 7018 command_runner.go:130] > fallthrough in-addr.arpa ip6.arpa
I0307 10:28:17.665540 7018 command_runner.go:130] > ttl 30
I0307 10:28:17.665547 7018 command_runner.go:130] > }
I0307 10:28:17.665555 7018 command_runner.go:130] > prometheus :9153
I0307 10:28:17.665561 7018 command_runner.go:130] > hosts {
I0307 10:28:17.665581 7018 command_runner.go:130] > 192.168.64.1 host.minikube.internal
I0307 10:28:17.665589 7018 command_runner.go:130] > fallthrough
I0307 10:28:17.665596 7018 command_runner.go:130] > }
I0307 10:28:17.665604 7018 command_runner.go:130] > forward . /etc/resolv.conf {
I0307 10:28:17.665613 7018 command_runner.go:130] > max_concurrent 1000
I0307 10:28:17.665622 7018 command_runner.go:130] > }
I0307 10:28:17.665633 7018 command_runner.go:130] > cache 30
I0307 10:28:17.665648 7018 command_runner.go:130] > loop
I0307 10:28:17.665659 7018 command_runner.go:130] > reload
I0307 10:28:17.665673 7018 command_runner.go:130] > loadbalance
I0307 10:28:17.665700 7018 command_runner.go:130] > }
I0307 10:28:17.665714 7018 command_runner.go:130] > kind: ConfigMap
I0307 10:28:17.665724 7018 command_runner.go:130] > metadata:
I0307 10:28:17.665738 7018 command_runner.go:130] > creationTimestamp: "2023-03-07T18:18:28Z"
I0307 10:28:17.665750 7018 command_runner.go:130] > name: coredns
I0307 10:28:17.665761 7018 command_runner.go:130] > namespace: kube-system
I0307 10:28:17.665769 7018 command_runner.go:130] > resourceVersion: "361"
I0307 10:28:17.665778 7018 command_runner.go:130] > uid: ab4f9271-2ad1-469a-9991-ac0e7cd4eee1
I0307 10:28:17.665875 7018 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0307 10:28:17.677281 7018 node_ready.go:35] waiting up to 6m0s for node "multinode-260000" to be "Ready" ...
I0307 10:28:17.688141 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:17.688153 7018 round_trippers.go:469] Request Headers:
I0307 10:28:17.688160 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:17.688165 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:17.699560 7018 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0307 10:28:17.699573 7018 round_trippers.go:577] Response Headers:
I0307 10:28:17.699579 7018 round_trippers.go:580] Audit-Id: b0a8d418-5306-402d-aafe-b01480d098d1
I0307 10:28:17.699584 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:17.699588 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:17.699594 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:17.699602 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:17.699607 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:17 GMT
I0307 10:28:17.699666 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:18.201280 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:18.201301 7018 round_trippers.go:469] Request Headers:
I0307 10:28:18.201313 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:18.201324 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:18.205520 7018 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0307 10:28:18.205536 7018 round_trippers.go:577] Response Headers:
I0307 10:28:18.205545 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:18 GMT
I0307 10:28:18.205551 7018 round_trippers.go:580] Audit-Id: 93568139-27e9-412b-aabc-a063cf381701
I0307 10:28:18.205556 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:18.205560 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:18.205566 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:18.205571 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:18.205679 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:18.700510 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:18.700532 7018 round_trippers.go:469] Request Headers:
I0307 10:28:18.700545 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:18.700556 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:18.703654 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:18.703670 7018 round_trippers.go:577] Response Headers:
I0307 10:28:18.703678 7018 round_trippers.go:580] Audit-Id: fe05d8ff-851d-43ec-87d1-ea8137b7dbe8
I0307 10:28:18.703684 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:18.703691 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:18.703714 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:18.703725 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:18.703732 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:18 GMT
I0307 10:28:18.703813 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:19.202177 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:19.202200 7018 round_trippers.go:469] Request Headers:
I0307 10:28:19.202214 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:19.202227 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:19.205274 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:19.205290 7018 round_trippers.go:577] Response Headers:
I0307 10:28:19.205298 7018 round_trippers.go:580] Audit-Id: 01e6aee3-dfa5-4ab3-b092-2707828ba795
I0307 10:28:19.205331 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:19.205342 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:19.205349 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:19.205357 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:19.205364 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:19 GMT
I0307 10:28:19.205470 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:19.700708 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:19.700729 7018 round_trippers.go:469] Request Headers:
I0307 10:28:19.700741 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:19.700751 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:19.703406 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:19.703422 7018 round_trippers.go:577] Response Headers:
I0307 10:28:19.703431 7018 round_trippers.go:580] Audit-Id: 3a975007-4ad9-4952-af4f-5375799e6a1a
I0307 10:28:19.703439 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:19.703445 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:19.703452 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:19.703458 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:19.703466 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:19 GMT
I0307 10:28:19.703543 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:19.703788 7018 node_ready.go:58] node "multinode-260000" has status "Ready":"False"
I0307 10:28:20.200489 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:20.200509 7018 round_trippers.go:469] Request Headers:
I0307 10:28:20.200521 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:20.200531 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:20.203162 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:20.203178 7018 round_trippers.go:577] Response Headers:
I0307 10:28:20.203186 7018 round_trippers.go:580] Audit-Id: a8a0b987-0c00-4eb2-84cc-bb8ba63cb67a
I0307 10:28:20.203193 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:20.203202 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:20.203212 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:20.203220 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:20.203228 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:20 GMT
I0307 10:28:20.203489 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:20.700672 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:20.700696 7018 round_trippers.go:469] Request Headers:
I0307 10:28:20.700709 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:20.700725 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:20.703549 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:20.703565 7018 round_trippers.go:577] Response Headers:
I0307 10:28:20.703573 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:20.703580 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:20.703586 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:20.703593 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:20 GMT
I0307 10:28:20.703599 7018 round_trippers.go:580] Audit-Id: efe8aac9-6cb0-4496-83f5-15dd81197a83
I0307 10:28:20.703607 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:20.703677 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:21.201352 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:21.201373 7018 round_trippers.go:469] Request Headers:
I0307 10:28:21.201385 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:21.201395 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:21.204173 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:21.204190 7018 round_trippers.go:577] Response Headers:
I0307 10:28:21.204197 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:21 GMT
I0307 10:28:21.204205 7018 round_trippers.go:580] Audit-Id: be92e2ce-4712-4f1e-861a-703e11d6cba4
I0307 10:28:21.204220 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:21.204229 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:21.204235 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:21.204243 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:21.204341 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:21.700804 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:21.700827 7018 round_trippers.go:469] Request Headers:
I0307 10:28:21.700840 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:21.700851 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:21.703563 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:21.703580 7018 round_trippers.go:577] Response Headers:
I0307 10:28:21.703588 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:21.703595 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:21.703602 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:21 GMT
I0307 10:28:21.703609 7018 round_trippers.go:580] Audit-Id: d76a302b-b114-4fb6-a945-db5c79d73c04
I0307 10:28:21.703616 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:21.703622 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:21.703693 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:21.703979 7018 node_ready.go:58] node "multinode-260000" has status "Ready":"False"
I0307 10:28:22.200196 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:22.200216 7018 round_trippers.go:469] Request Headers:
I0307 10:28:22.200229 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:22.200239 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:22.202586 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:22.202599 7018 round_trippers.go:577] Response Headers:
I0307 10:28:22.202606 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:22.202614 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:22 GMT
I0307 10:28:22.202622 7018 round_trippers.go:580] Audit-Id: 4ff0cc55-c046-416f-9185-daae0bebce4a
I0307 10:28:22.202632 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:22.202639 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:22.202696 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:22.202811 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:22.700709 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:22.700730 7018 round_trippers.go:469] Request Headers:
I0307 10:28:22.700742 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:22.700752 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:22.702936 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:22.723882 7018 round_trippers.go:577] Response Headers:
I0307 10:28:22.723896 7018 round_trippers.go:580] Audit-Id: 29769d58-0043-4d39-82f0-cccd4df4015a
I0307 10:28:22.723957 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:22.723969 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:22.723978 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:22.723988 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:22.723998 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:22 GMT
I0307 10:28:22.724094 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:23.200620 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:23.200644 7018 round_trippers.go:469] Request Headers:
I0307 10:28:23.200657 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:23.200667 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:23.203465 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:23.203481 7018 round_trippers.go:577] Response Headers:
I0307 10:28:23.203489 7018 round_trippers.go:580] Audit-Id: 9e76918b-04a7-460f-b7a3-1bb26e8c0971
I0307 10:28:23.203496 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:23.203502 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:23.203510 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:23.203517 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:23.203523 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:23 GMT
I0307 10:28:23.203617 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
I0307 10:28:23.700169 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:23.700191 7018 round_trippers.go:469] Request Headers:
I0307 10:28:23.700203 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:23.700213 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:23.703029 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:23.703045 7018 round_trippers.go:577] Response Headers:
I0307 10:28:23.703053 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:23.703059 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:23.703067 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:23.703076 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:23.703088 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:23 GMT
I0307 10:28:23.703098 7018 round_trippers.go:580] Audit-Id: ef8f12d5-7107-46fa-a902-ce29a6cd21c5
I0307 10:28:23.703227 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:23.703480 7018 node_ready.go:49] node "multinode-260000" has status "Ready":"True"
I0307 10:28:23.703494 7018 node_ready.go:38] duration metric: took 6.026171359s waiting for node "multinode-260000" to be "Ready" ...
I0307 10:28:23.703502 7018 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 10:28:23.703549 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:23.703555 7018 round_trippers.go:469] Request Headers:
I0307 10:28:23.703563 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:23.703572 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:23.705759 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:23.705769 7018 round_trippers.go:577] Response Headers:
I0307 10:28:23.705780 7018 round_trippers.go:580] Audit-Id: 67287338-b563-4ece-963d-6a23473c12f5
I0307 10:28:23.705788 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:23.705795 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:23.705804 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:23.705811 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:23.705818 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:23 GMT
I0307 10:28:23.706556 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1094"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83638 chars]
I0307 10:28:23.708320 7018 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
I0307 10:28:23.708353 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:23.708358 7018 round_trippers.go:469] Request Headers:
I0307 10:28:23.708374 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:23.708381 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:23.709654 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:23.709668 7018 round_trippers.go:577] Response Headers:
I0307 10:28:23.709674 7018 round_trippers.go:580] Audit-Id: 31e97546-40fd-4948-9b6f-419bdad39a05
I0307 10:28:23.709680 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:23.709685 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:23.709690 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:23.709696 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:23.709701 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:23 GMT
I0307 10:28:23.709974 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:23.710200 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:23.710205 7018 round_trippers.go:469] Request Headers:
I0307 10:28:23.710212 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:23.710218 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:23.711266 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:23.711276 7018 round_trippers.go:577] Response Headers:
I0307 10:28:23.711284 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:23.711291 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:23.711299 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:23.711307 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:23.711316 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:23 GMT
I0307 10:28:23.711324 7018 round_trippers.go:580] Audit-Id: ef253b5e-8ae9-4c22-97b4-635ece1c07f1
I0307 10:28:23.711443 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:24.211832 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:24.211854 7018 round_trippers.go:469] Request Headers:
I0307 10:28:24.211868 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:24.211879 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:24.214134 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:24.214147 7018 round_trippers.go:577] Response Headers:
I0307 10:28:24.214155 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:24.214161 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:24.214169 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:24 GMT
I0307 10:28:24.214176 7018 round_trippers.go:580] Audit-Id: 7cceac8c-72f2-43b3-a70c-da8298a351ea
I0307 10:28:24.214183 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:24.214189 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:24.214267 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:24.214622 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:24.214631 7018 round_trippers.go:469] Request Headers:
I0307 10:28:24.214639 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:24.214647 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:24.216139 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:24.216148 7018 round_trippers.go:577] Response Headers:
I0307 10:28:24.216154 7018 round_trippers.go:580] Audit-Id: 651af490-ed9e-4eba-a495-32b2210d00c4
I0307 10:28:24.216159 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:24.216167 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:24.216176 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:24.216187 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:24.216193 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:24 GMT
I0307 10:28:24.216294 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:24.712583 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:24.712604 7018 round_trippers.go:469] Request Headers:
I0307 10:28:24.712617 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:24.712627 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:24.715128 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:24.715141 7018 round_trippers.go:577] Response Headers:
I0307 10:28:24.715151 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:24.715174 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:24.715202 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:24.715215 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:24 GMT
I0307 10:28:24.715229 7018 round_trippers.go:580] Audit-Id: 64f8c7b5-e206-4888-b04e-57f95c098459
I0307 10:28:24.715263 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:24.715362 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:24.715724 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:24.715733 7018 round_trippers.go:469] Request Headers:
I0307 10:28:24.715741 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:24.715748 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:24.717117 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:24.717131 7018 round_trippers.go:577] Response Headers:
I0307 10:28:24.717139 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:24.717149 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:24.717158 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:24 GMT
I0307 10:28:24.717165 7018 round_trippers.go:580] Audit-Id: 39facfb8-6882-4093-a54a-be9e41cdcd8a
I0307 10:28:24.717189 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:24.717203 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:24.717297 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:25.211941 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:25.211961 7018 round_trippers.go:469] Request Headers:
I0307 10:28:25.211973 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:25.211984 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:25.214996 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:25.215012 7018 round_trippers.go:577] Response Headers:
I0307 10:28:25.215056 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:25.215076 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:25.215089 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:25.215121 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:25 GMT
I0307 10:28:25.215133 7018 round_trippers.go:580] Audit-Id: eab464a3-fd8c-4abd-92da-a9e3fab09b87
I0307 10:28:25.215153 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:25.215232 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:25.215588 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:25.215596 7018 round_trippers.go:469] Request Headers:
I0307 10:28:25.215604 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:25.215611 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:25.216989 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:25.217000 7018 round_trippers.go:577] Response Headers:
I0307 10:28:25.217005 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:25.217010 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:25 GMT
I0307 10:28:25.217021 7018 round_trippers.go:580] Audit-Id: 1b48fc62-d0ae-42f1-a567-d263b0778b46
I0307 10:28:25.217026 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:25.217031 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:25.217038 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:25.217228 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:25.713156 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:25.713175 7018 round_trippers.go:469] Request Headers:
I0307 10:28:25.713187 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:25.713197 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:25.715881 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:25.715901 7018 round_trippers.go:577] Response Headers:
I0307 10:28:25.715913 7018 round_trippers.go:580] Audit-Id: b458a53f-cebf-4dba-b1b0-795a83b24bef
I0307 10:28:25.715924 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:25.715933 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:25.715939 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:25.715946 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:25.715956 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:25 GMT
I0307 10:28:25.716134 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:25.716499 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:25.716508 7018 round_trippers.go:469] Request Headers:
I0307 10:28:25.716516 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:25.716523 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:25.717669 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:25.717677 7018 round_trippers.go:577] Response Headers:
I0307 10:28:25.717683 7018 round_trippers.go:580] Audit-Id: 1eb8ab80-758c-4e81-8dcb-159f98be89b6
I0307 10:28:25.717691 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:25.717698 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:25.717705 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:25.717711 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:25.717717 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:25 GMT
I0307 10:28:25.717847 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:25.718043 7018 pod_ready.go:102] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"False"
I0307 10:28:26.211810 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:26.211826 7018 round_trippers.go:469] Request Headers:
I0307 10:28:26.211833 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:26.211854 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:26.217580 7018 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0307 10:28:26.217593 7018 round_trippers.go:577] Response Headers:
I0307 10:28:26.217599 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:26.217624 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:26 GMT
I0307 10:28:26.217634 7018 round_trippers.go:580] Audit-Id: 25844fb6-cd84-4dd3-af18-9f89ee6d5a04
I0307 10:28:26.217641 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:26.217646 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:26.217651 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:26.218222 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:26.218502 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:26.218509 7018 round_trippers.go:469] Request Headers:
I0307 10:28:26.218515 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:26.218520 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:26.223546 7018 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0307 10:28:26.223558 7018 round_trippers.go:577] Response Headers:
I0307 10:28:26.223563 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:26 GMT
I0307 10:28:26.223568 7018 round_trippers.go:580] Audit-Id: bf250b8a-6074-45b3-9f33-45ad42a6a343
I0307 10:28:26.223573 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:26.223578 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:26.223582 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:26.223587 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:26.224042 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:26.713218 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:26.713243 7018 round_trippers.go:469] Request Headers:
I0307 10:28:26.713255 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:26.713265 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:26.716102 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:26.716121 7018 round_trippers.go:577] Response Headers:
I0307 10:28:26.716129 7018 round_trippers.go:580] Audit-Id: 219d5f63-3a7c-44c7-8b51-2921f95c2710
I0307 10:28:26.716136 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:26.716144 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:26.716151 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:26.716157 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:26.716165 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:26 GMT
I0307 10:28:26.716247 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:26.716596 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:26.716604 7018 round_trippers.go:469] Request Headers:
I0307 10:28:26.716612 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:26.716619 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:26.718244 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:26.718252 7018 round_trippers.go:577] Response Headers:
I0307 10:28:26.718258 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:26.718264 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:26.718274 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:26.718280 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:26.718288 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:26 GMT
I0307 10:28:26.718293 7018 round_trippers.go:580] Audit-Id: ad769d45-1dbe-4f0f-bad4-953da8623939
I0307 10:28:26.718441 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:27.212704 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:27.212727 7018 round_trippers.go:469] Request Headers:
I0307 10:28:27.212739 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:27.212749 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:27.215311 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:27.215337 7018 round_trippers.go:577] Response Headers:
I0307 10:28:27.215345 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:27.215353 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:27.215361 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:27 GMT
I0307 10:28:27.215367 7018 round_trippers.go:580] Audit-Id: 36856e4f-a7e1-45d6-97ce-8f885ac8c841
I0307 10:28:27.215374 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:27.215381 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:27.215565 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:27.215939 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:27.215948 7018 round_trippers.go:469] Request Headers:
I0307 10:28:27.215956 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:27.215964 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:27.217347 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:27.217354 7018 round_trippers.go:577] Response Headers:
I0307 10:28:27.217362 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:27.217368 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:27 GMT
I0307 10:28:27.217374 7018 round_trippers.go:580] Audit-Id: d6676113-bd9a-4eaf-ba1b-019818744e42
I0307 10:28:27.217381 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:27.217389 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:27.217404 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:27.217556 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:27.711824 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:27.724865 7018 round_trippers.go:469] Request Headers:
I0307 10:28:27.724880 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:27.724887 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:27.726579 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:27.726589 7018 round_trippers.go:577] Response Headers:
I0307 10:28:27.726594 7018 round_trippers.go:580] Audit-Id: 0d01fa41-8246-4722-9399-93a5592f6b29
I0307 10:28:27.726599 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:27.726606 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:27.726613 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:27.726619 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:27.726624 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:27 GMT
I0307 10:28:27.726876 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:27.727175 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:27.727181 7018 round_trippers.go:469] Request Headers:
I0307 10:28:27.727187 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:27.727192 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:27.728314 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:27.728322 7018 round_trippers.go:577] Response Headers:
I0307 10:28:27.728334 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:27.728347 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:27.728353 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:27.728370 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:27 GMT
I0307 10:28:27.728379 7018 round_trippers.go:580] Audit-Id: 0e3e9ef9-ecac-45df-aee2-aff56bc03a97
I0307 10:28:27.728391 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:27.728478 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:27.728664 7018 pod_ready.go:102] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"False"
I0307 10:28:28.212950 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:28.212969 7018 round_trippers.go:469] Request Headers:
I0307 10:28:28.212982 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:28.212992 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:28.216019 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:28.216035 7018 round_trippers.go:577] Response Headers:
I0307 10:28:28.216043 7018 round_trippers.go:580] Audit-Id: 24e3382f-877e-4bd3-9d01-53648e905133
I0307 10:28:28.216051 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:28.216057 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:28.216064 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:28.216072 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:28.216078 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:28 GMT
I0307 10:28:28.216218 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:28.216592 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:28.216601 7018 round_trippers.go:469] Request Headers:
I0307 10:28:28.216610 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:28.216617 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:28.218098 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:28.218109 7018 round_trippers.go:577] Response Headers:
I0307 10:28:28.218116 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:28 GMT
I0307 10:28:28.218121 7018 round_trippers.go:580] Audit-Id: ba13bf42-a23e-4b8b-b82d-f134c64fb02d
I0307 10:28:28.218133 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:28.218139 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:28.218144 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:28.218149 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:28.218380 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:28.713844 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:28.713872 7018 round_trippers.go:469] Request Headers:
I0307 10:28:28.713886 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:28.713897 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:28.717059 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:28.717075 7018 round_trippers.go:577] Response Headers:
I0307 10:28:28.717082 7018 round_trippers.go:580] Audit-Id: 2d17ebc7-34f0-4220-a01c-eba9dc18629b
I0307 10:28:28.717089 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:28.717096 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:28.717102 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:28.717109 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:28.717115 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:28 GMT
I0307 10:28:28.717206 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:28.717584 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:28.717593 7018 round_trippers.go:469] Request Headers:
I0307 10:28:28.717601 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:28.717609 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:28.718961 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:28.718971 7018 round_trippers.go:577] Response Headers:
I0307 10:28:28.718978 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:28.718982 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:28.718987 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:28.718992 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:28 GMT
I0307 10:28:28.718997 7018 round_trippers.go:580] Audit-Id: 1a95c19b-155c-4919-8f52-e4a21e53e43d
I0307 10:28:28.719002 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:28.719162 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:29.212285 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:29.212298 7018 round_trippers.go:469] Request Headers:
I0307 10:28:29.212305 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:29.212310 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:29.214049 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:29.214059 7018 round_trippers.go:577] Response Headers:
I0307 10:28:29.214065 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:29.214070 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:29.214075 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:29.214080 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:29 GMT
I0307 10:28:29.214087 7018 round_trippers.go:580] Audit-Id: 5902e368-f17f-4c82-9c7c-675d086888dd
I0307 10:28:29.214092 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:29.214228 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:29.214511 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:29.214517 7018 round_trippers.go:469] Request Headers:
I0307 10:28:29.214523 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:29.214529 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:29.215699 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:29.215709 7018 round_trippers.go:577] Response Headers:
I0307 10:28:29.215716 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:29.215723 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:29 GMT
I0307 10:28:29.215729 7018 round_trippers.go:580] Audit-Id: b6d6f5f7-09c3-4195-a4c1-845aef7ffc32
I0307 10:28:29.215734 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:29.215740 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:29.215747 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:29.215925 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:29.713052 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:29.713064 7018 round_trippers.go:469] Request Headers:
I0307 10:28:29.713070 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:29.713076 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:29.714443 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:29.714452 7018 round_trippers.go:577] Response Headers:
I0307 10:28:29.714457 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:29.714463 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:29.714468 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:29.714479 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:29 GMT
I0307 10:28:29.714484 7018 round_trippers.go:580] Audit-Id: 9c79de10-38b6-4cc5-8a5c-f518875339a0
I0307 10:28:29.714489 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:29.714549 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:29.714827 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:29.714833 7018 round_trippers.go:469] Request Headers:
I0307 10:28:29.714839 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:29.714844 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:29.723979 7018 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0307 10:28:29.723993 7018 round_trippers.go:577] Response Headers:
I0307 10:28:29.724011 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:29.724019 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:29.724028 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:29.724034 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:29 GMT
I0307 10:28:29.724040 7018 round_trippers.go:580] Audit-Id: 23a3f013-edd3-4bde-b9dc-3cdee57361b7
I0307 10:28:29.724046 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:29.724143 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.211801 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:30.211812 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.211819 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.211824 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.213958 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:30.213967 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.213972 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.213979 7018 round_trippers.go:580] Audit-Id: e3914bca-23b4-48cb-b3f3-c3e31ebe9b8e
I0307 10:28:30.213984 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.213989 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.213994 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.213999 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.219685 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
I0307 10:28:30.219986 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:30.219995 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.220004 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.220012 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.221717 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:30.221732 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.221741 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.221756 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.221762 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.221769 7018 round_trippers.go:580] Audit-Id: f3b83e3d-bec0-444f-bd00-ec3be70f6d10
I0307 10:28:30.221777 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.221783 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.221864 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.222060 7018 pod_ready.go:102] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"False"
I0307 10:28:30.712597 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:30.712622 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.712717 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.712731 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.716221 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:30.716239 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.716247 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.716256 7018 round_trippers.go:580] Audit-Id: c7b16bdb-1c9a-42a3-b989-2ef728451887
I0307 10:28:30.716263 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.716270 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.716278 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.716284 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.716375 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6489 chars]
I0307 10:28:30.716777 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:30.716785 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.716793 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.716801 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.718436 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:30.718450 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.718457 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.718466 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.718473 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.718480 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.718485 7018 round_trippers.go:580] Audit-Id: 405256c2-a3b7-4450-9419-3e5f6172aabd
I0307 10:28:30.718491 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.718618 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.718803 7018 pod_ready.go:92] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:30.718812 7018 pod_ready.go:81] duration metric: took 7.010451765s waiting for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.718825 7018 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.718853 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-260000
I0307 10:28:30.719043 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.719125 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.719139 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.721072 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:30.721084 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.721090 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.721095 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.721100 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.721105 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.721110 7018 round_trippers.go:580] Audit-Id: ea8580ee-1e6e-4f3b-8474-356c1d7d09d5
I0307 10:28:30.721114 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.721227 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"1080","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6056 chars]
I0307 10:28:30.721443 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:30.721450 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.721456 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.721461 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.722677 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:30.722687 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.722699 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.722710 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.722719 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.722725 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.722731 7018 round_trippers.go:580] Audit-Id: 9a6b5445-3298-4c53-9f39-0cfd9f3d0951
I0307 10:28:30.722738 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.722826 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.723009 7018 pod_ready.go:92] pod "etcd-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:30.723015 7018 pod_ready.go:81] duration metric: took 4.185851ms waiting for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.723025 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.723049 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-260000
I0307 10:28:30.723053 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.723059 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.723068 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.725808 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:30.725819 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.725824 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.725830 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.725835 7018 round_trippers.go:580] Audit-Id: 27751b68-dbeb-4139-b048-aa37ba96ce0d
I0307 10:28:30.725840 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.725844 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.725850 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.725930 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-260000","namespace":"kube-system","uid":"64ba25bc-eee2-433a-b0ef-a13769f04555","resourceVersion":"1143","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"76402f877907c95a3936143f580968be","kubernetes.io/config.mirror":"76402f877907c95a3936143f580968be","kubernetes.io/config.seen":"2023-03-07T18:18:28.739580253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7591 chars]
I0307 10:28:30.726162 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:30.726168 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.726173 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.726179 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.727114 7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0307 10:28:30.727123 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.727129 7018 round_trippers.go:580] Audit-Id: 09ac9355-1c65-4420-8f52-155883618aa6
I0307 10:28:30.727134 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.727140 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.727145 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.727150 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.727155 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.727288 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.727470 7018 pod_ready.go:92] pod "kube-apiserver-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:30.727476 7018 pod_ready.go:81] duration metric: took 4.446202ms waiting for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.727481 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.727505 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-260000
I0307 10:28:30.727510 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.727516 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.727522 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.728648 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:30.728659 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.728665 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.728670 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.728674 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.728679 7018 round_trippers.go:580] Audit-Id: 559a8b88-70d9-4098-a5fd-ce69e6fc06be
I0307 10:28:30.728684 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.728688 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.728916 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-260000","namespace":"kube-system","uid":"8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c","resourceVersion":"1131","creationTimestamp":"2023-03-07T18:18:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.mirror":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.seen":"2023-03-07T18:18:16.838236256Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7159 chars]
I0307 10:28:30.729139 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:30.729145 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.729151 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.729157 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.730563 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:30.730570 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.730575 7018 round_trippers.go:580] Audit-Id: 8efa58ee-7b42-4ba5-a878-ad10e7d3e33b
I0307 10:28:30.730579 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.730584 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.730588 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.730593 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.730599 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.730701 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.730866 7018 pod_ready.go:92] pod "kube-controller-manager-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:30.730872 7018 pod_ready.go:81] duration metric: took 3.385852ms waiting for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.730877 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.730902 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
I0307 10:28:30.730906 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.730912 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.730918 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.731885 7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0307 10:28:30.731894 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.731900 7018 round_trippers.go:580] Audit-Id: ffc44502-d870-437e-9544-bf450ca2b814
I0307 10:28:30.731906 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.731914 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.731920 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.731925 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.731930 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.732036 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8qwhq","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e455149-bbe2-4173-a413-f4962626b233","resourceVersion":"1061","creationTimestamp":"2023-03-07T18:18:41Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
I0307 10:28:30.732243 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:30.732248 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.732255 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.732260 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.733218 7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0307 10:28:30.733226 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.733232 7018 round_trippers.go:580] Audit-Id: 3937160f-ce1c-4927-8fe0-6e7893d1567c
I0307 10:28:30.733237 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.733244 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.733248 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.733253 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.733258 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:30 GMT
I0307 10:28:30.733356 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:30.733519 7018 pod_ready.go:92] pod "kube-proxy-8qwhq" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:30.733525 7018 pod_ready.go:81] duration metric: took 2.642988ms waiting for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.733531 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
I0307 10:28:30.912636 7018 request.go:622] Waited for 179.066998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
I0307 10:28:30.912685 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
I0307 10:28:30.912694 7018 round_trippers.go:469] Request Headers:
I0307 10:28:30.912778 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:30.912791 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:30.915495 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:30.915507 7018 round_trippers.go:577] Response Headers:
I0307 10:28:30.915515 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:30.915522 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:30.915530 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:30.915536 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:30.915544 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:31 GMT
I0307 10:28:30.915550 7018 round_trippers.go:580] Audit-Id: 3ae79f8d-1535-4d8e-a180-5f18227960da
I0307 10:28:30.915655 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pxshj","generateName":"kube-proxy-","namespace":"kube-system","uid":"3ee33e87-083d-4833-a6d4-8b459ec6ea70","resourceVersion":"469","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
I0307 10:28:31.114599 7018 request.go:622] Waited for 198.634122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:31.114628 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:31.114633 7018 round_trippers.go:469] Request Headers:
I0307 10:28:31.114642 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:31.114649 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:31.116473 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:31.116483 7018 round_trippers.go:577] Response Headers:
I0307 10:28:31.116488 7018 round_trippers.go:580] Audit-Id: e955a99c-57ac-4ae0-a513-9afa809a5caf
I0307 10:28:31.116493 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:31.116498 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:31.116503 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:31.116509 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:31.116513 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:31 GMT
I0307 10:28:31.116688 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"75f8e0c4-47f4-43dc-ac5e-5f77d8d4ab3b","resourceVersion":"812","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4512 chars]
I0307 10:28:31.116864 7018 pod_ready.go:92] pod "kube-proxy-pxshj" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:31.116870 7018 pod_ready.go:81] duration metric: took 383.333062ms waiting for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
I0307 10:28:31.116876 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
I0307 10:28:31.314683 7018 request.go:622] Waited for 197.728848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
I0307 10:28:31.314736 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
I0307 10:28:31.314770 7018 round_trippers.go:469] Request Headers:
I0307 10:28:31.314788 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:31.314803 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:31.317976 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:31.317992 7018 round_trippers.go:577] Response Headers:
I0307 10:28:31.318000 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:31 GMT
I0307 10:28:31.318029 7018 round_trippers.go:580] Audit-Id: a357c92b-2320-4582-b9e7-f62d05a9d4e3
I0307 10:28:31.318042 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:31.318051 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:31.318057 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:31.318064 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:31.318199 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8cm8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9f69548-a872-4d80-aa73-ffba99b33229","resourceVersion":"1005","creationTimestamp":"2023-03-07T18:26:06Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
I0307 10:28:31.514054 7018 request.go:622] Waited for 195.505176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
I0307 10:28:31.514146 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
I0307 10:28:31.514242 7018 round_trippers.go:469] Request Headers:
I0307 10:28:31.514254 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:31.514267 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:31.517133 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:31.517148 7018 round_trippers.go:577] Response Headers:
I0307 10:28:31.517156 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:31.517163 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:31.517171 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:31.517178 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:31.517184 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:31 GMT
I0307 10:28:31.517191 7018 round_trippers.go:580] Audit-Id: 532579cf-d5cc-41c0-b38e-54a2f800d22f
I0307 10:28:31.517302 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m03","uid":"c193c270-6b50-44d5-962f-c88bf307bb54","resourceVersion":"1109","creationTimestamp":"2023-03-07T18:26:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4330 chars]
I0307 10:28:31.517527 7018 pod_ready.go:92] pod "kube-proxy-q8cm8" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:31.517534 7018 pod_ready.go:81] duration metric: took 400.651378ms waiting for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
I0307 10:28:31.517542 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:31.713858 7018 request.go:622] Waited for 196.240525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
I0307 10:28:31.713912 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
I0307 10:28:31.713952 7018 round_trippers.go:469] Request Headers:
I0307 10:28:31.713969 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:31.713983 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:31.716855 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:31.716871 7018 round_trippers.go:577] Response Headers:
I0307 10:28:31.716879 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:31.716894 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:31 GMT
I0307 10:28:31.716902 7018 round_trippers.go:580] Audit-Id: 291b5d9b-3357-4be3-9d0c-89832cae8ad3
I0307 10:28:31.716910 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:31.716917 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:31.716924 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:31.717008 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-260000","namespace":"kube-system","uid":"0739e1eb-4026-47ee-b2fe-6a9901c77317","resourceVersion":"1139","creationTimestamp":"2023-03-07T18:18:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.mirror":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.seen":"2023-03-07T18:18:28.739583516Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4889 chars]
I0307 10:28:31.912715 7018 request.go:622] Waited for 195.420936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:31.912766 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:31.912775 7018 round_trippers.go:469] Request Headers:
I0307 10:28:31.912789 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:31.912852 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:31.915496 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:31.915515 7018 round_trippers.go:577] Response Headers:
I0307 10:28:31.915523 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:32 GMT
I0307 10:28:31.915532 7018 round_trippers.go:580] Audit-Id: ab49a22e-b0ca-4460-8af6-f31980cc83e0
I0307 10:28:31.915539 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:31.915547 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:31.915558 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:31.915565 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:31.915671 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:31.915930 7018 pod_ready.go:92] pod "kube-scheduler-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:31.915938 7018 pod_ready.go:81] duration metric: took 398.388063ms waiting for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:31.915946 7018 pod_ready.go:38] duration metric: took 8.212399171s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 10:28:31.915959 7018 api_server.go:51] waiting for apiserver process to appear ...
I0307 10:28:31.916021 7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 10:28:31.926000 7018 command_runner.go:130] > 1604
I0307 10:28:31.926101 7018 api_server.go:71] duration metric: took 14.339953362s to wait for apiserver process to appear ...
I0307 10:28:31.926109 7018 api_server.go:87] waiting for apiserver healthz status ...
I0307 10:28:31.926115 7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
I0307 10:28:31.929766 7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 200:
ok
I0307 10:28:31.929791 7018 round_trippers.go:463] GET https://192.168.64.12:8443/version
I0307 10:28:31.929796 7018 round_trippers.go:469] Request Headers:
I0307 10:28:31.929803 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:31.929809 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:31.930265 7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0307 10:28:31.930272 7018 round_trippers.go:577] Response Headers:
I0307 10:28:31.930277 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:31.930283 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:31.930291 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:31.930297 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:31.930302 7018 round_trippers.go:580] Content-Length: 263
I0307 10:28:31.930307 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:32 GMT
I0307 10:28:31.930313 7018 round_trippers.go:580] Audit-Id: 416b7f0f-553f-48b8-8633-6be8897b3ddf
I0307 10:28:31.930330 7018 request.go:1171] Response Body: {
"major": "1",
"minor": "26",
"gitVersion": "v1.26.2",
"gitCommit": "fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b",
"gitTreeState": "clean",
"buildDate": "2023-02-22T13:32:22Z",
"goVersion": "go1.19.6",
"compiler": "gc",
"platform": "linux/amd64"
}
I0307 10:28:31.930354 7018 api_server.go:140] control plane version: v1.26.2
I0307 10:28:31.930360 7018 api_server.go:130] duration metric: took 4.24718ms to wait for apiserver health ...
I0307 10:28:31.930364 7018 system_pods.go:43] waiting for kube-system pods to appear ...
I0307 10:28:32.112716 7018 request.go:622] Waited for 182.311615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:32.112771 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:32.112780 7018 round_trippers.go:469] Request Headers:
I0307 10:28:32.112834 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:32.112848 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:32.116811 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:32.116841 7018 round_trippers.go:577] Response Headers:
I0307 10:28:32.116877 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:32.116904 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:32 GMT
I0307 10:28:32.116916 7018 round_trippers.go:580] Audit-Id: c5d1857d-a22f-42d9-aec9-08ad8e7331bd
I0307 10:28:32.116950 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:32.116966 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:32.116973 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:32.118187 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82836 chars]
I0307 10:28:32.119945 7018 system_pods.go:59] 12 kube-system pods found
I0307 10:28:32.119954 7018 system_pods.go:61] "coredns-787d4945fb-x8m8v" [c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6] Running
I0307 10:28:32.119958 7018 system_pods.go:61] "etcd-multinode-260000" [aa53b0f1-968e-450d-90b2-ad26a79cea99] Running
I0307 10:28:32.119963 7018 system_pods.go:61] "kindnet-gfgwn" [64dc8044-f77e-41b4-bb19-1a254bf29e05] Running
I0307 10:28:32.119967 7018 system_pods.go:61] "kindnet-j5gj9" [f17b9702-c5c0-4b31-a136-e0370bc62d79] Running
I0307 10:28:32.119970 7018 system_pods.go:61] "kindnet-z6kqp" [4884d21b-1be9-4b53-8f70-dd4fe0efa264] Running
I0307 10:28:32.119975 7018 system_pods.go:61] "kube-apiserver-multinode-260000" [64ba25bc-eee2-433a-b0ef-a13769f04555] Running
I0307 10:28:32.119993 7018 system_pods.go:61] "kube-controller-manager-multinode-260000" [8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c] Running
I0307 10:28:32.120000 7018 system_pods.go:61] "kube-proxy-8qwhq" [3e455149-bbe2-4173-a413-f4962626b233] Running
I0307 10:28:32.120004 7018 system_pods.go:61] "kube-proxy-pxshj" [3ee33e87-083d-4833-a6d4-8b459ec6ea70] Running
I0307 10:28:32.120008 7018 system_pods.go:61] "kube-proxy-q8cm8" [b9f69548-a872-4d80-aa73-ffba99b33229] Running
I0307 10:28:32.120011 7018 system_pods.go:61] "kube-scheduler-multinode-260000" [0739e1eb-4026-47ee-b2fe-6a9901c77317] Running
I0307 10:28:32.120016 7018 system_pods.go:61] "storage-provisioner" [0b88c317-8e90-4927-b4f8-cae5597b5dc8] Running
I0307 10:28:32.120019 7018 system_pods.go:74] duration metric: took 189.651129ms to wait for pod list to return data ...
I0307 10:28:32.120025 7018 default_sa.go:34] waiting for default service account to be created ...
I0307 10:28:32.313205 7018 request.go:622] Waited for 193.131438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/default/serviceaccounts
I0307 10:28:32.313251 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/default/serviceaccounts
I0307 10:28:32.313259 7018 round_trippers.go:469] Request Headers:
I0307 10:28:32.313271 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:32.313281 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:32.315756 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:32.315778 7018 round_trippers.go:577] Response Headers:
I0307 10:28:32.315809 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:32.315822 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:32.315830 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:32.315837 7018 round_trippers.go:580] Content-Length: 262
I0307 10:28:32.315843 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:32 GMT
I0307 10:28:32.315850 7018 round_trippers.go:580] Audit-Id: ac7a8c42-5ffa-402f-970f-d1d5a6d3058d
I0307 10:28:32.315857 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:32.315874 7018 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6e32b5cd-63bd-46a7-9ed5-ea842da6729c","resourceVersion":"325","creationTimestamp":"2023-03-07T18:18:42Z"}}]}
I0307 10:28:32.316001 7018 default_sa.go:45] found service account: "default"
I0307 10:28:32.316010 7018 default_sa.go:55] duration metric: took 195.9795ms for default service account to be created ...
I0307 10:28:32.316018 7018 system_pods.go:116] waiting for k8s-apps to be running ...
I0307 10:28:32.513632 7018 request.go:622] Waited for 197.482521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:32.513683 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:32.513691 7018 round_trippers.go:469] Request Headers:
I0307 10:28:32.513704 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:32.513718 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:32.517123 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:32.517133 7018 round_trippers.go:577] Response Headers:
I0307 10:28:32.517139 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:32.517144 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:32.517148 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:32 GMT
I0307 10:28:32.517154 7018 round_trippers.go:580] Audit-Id: c5f53d8f-ee73-49a6-be78-6ca8c2200a8e
I0307 10:28:32.517161 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:32.517168 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:32.517894 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82836 chars]
I0307 10:28:32.519632 7018 system_pods.go:86] 12 kube-system pods found
I0307 10:28:32.519641 7018 system_pods.go:89] "coredns-787d4945fb-x8m8v" [c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6] Running
I0307 10:28:32.519650 7018 system_pods.go:89] "etcd-multinode-260000" [aa53b0f1-968e-450d-90b2-ad26a79cea99] Running
I0307 10:28:32.519654 7018 system_pods.go:89] "kindnet-gfgwn" [64dc8044-f77e-41b4-bb19-1a254bf29e05] Running
I0307 10:28:32.519659 7018 system_pods.go:89] "kindnet-j5gj9" [f17b9702-c5c0-4b31-a136-e0370bc62d79] Running
I0307 10:28:32.519664 7018 system_pods.go:89] "kindnet-z6kqp" [4884d21b-1be9-4b53-8f70-dd4fe0efa264] Running
I0307 10:28:32.519668 7018 system_pods.go:89] "kube-apiserver-multinode-260000" [64ba25bc-eee2-433a-b0ef-a13769f04555] Running
I0307 10:28:32.519671 7018 system_pods.go:89] "kube-controller-manager-multinode-260000" [8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c] Running
I0307 10:28:32.519675 7018 system_pods.go:89] "kube-proxy-8qwhq" [3e455149-bbe2-4173-a413-f4962626b233] Running
I0307 10:28:32.519679 7018 system_pods.go:89] "kube-proxy-pxshj" [3ee33e87-083d-4833-a6d4-8b459ec6ea70] Running
I0307 10:28:32.519683 7018 system_pods.go:89] "kube-proxy-q8cm8" [b9f69548-a872-4d80-aa73-ffba99b33229] Running
I0307 10:28:32.519686 7018 system_pods.go:89] "kube-scheduler-multinode-260000" [0739e1eb-4026-47ee-b2fe-6a9901c77317] Running
I0307 10:28:32.519690 7018 system_pods.go:89] "storage-provisioner" [0b88c317-8e90-4927-b4f8-cae5597b5dc8] Running
I0307 10:28:32.519694 7018 system_pods.go:126] duration metric: took 203.671188ms to wait for k8s-apps to be running ...
I0307 10:28:32.519699 7018 system_svc.go:44] waiting for kubelet service to be running ....
I0307 10:28:32.519751 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 10:28:32.528776 7018 system_svc.go:56] duration metric: took 9.073723ms WaitForService to wait for kubelet.
I0307 10:28:32.528791 7018 kubeadm.go:578] duration metric: took 14.942639871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0307 10:28:32.528801 7018 node_conditions.go:102] verifying NodePressure condition ...
I0307 10:28:32.714684 7018 request.go:622] Waited for 185.826429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes
I0307 10:28:32.725835 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
I0307 10:28:32.725851 7018 round_trippers.go:469] Request Headers:
I0307 10:28:32.725863 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:32.725878 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:32.728446 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:32.728460 7018 round_trippers.go:577] Response Headers:
I0307 10:28:32.728468 7018 round_trippers.go:580] Audit-Id: baedd684-4a38-47c3-8b1a-5bac961a5fbc
I0307 10:28:32.728477 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:32.728490 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:32.728500 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:32.728507 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:32.728514 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:32 GMT
I0307 10:28:32.728762 7018 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16210 chars]
I0307 10:28:32.729257 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:28:32.729266 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:28:32.729274 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:28:32.729278 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:28:32.729282 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:28:32.729286 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:28:32.729289 7018 node_conditions.go:105] duration metric: took 200.482518ms to run NodePressure ...
I0307 10:28:32.729297 7018 start.go:228] waiting for startup goroutines ...
I0307 10:28:32.729302 7018 start.go:233] waiting for cluster config update ...
I0307 10:28:32.729308 7018 start.go:242] writing updated cluster config ...
I0307 10:28:32.729786 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:28:32.729851 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:28:32.751369 7018 out.go:177] * Starting worker node multinode-260000-m02 in cluster multinode-260000
I0307 10:28:32.794328 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:28:32.794413 7018 cache.go:57] Caching tarball of preloaded images
I0307 10:28:32.794583 7018 preload.go:174] Found /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0307 10:28:32.794601 7018 cache.go:60] Finished verifying existence of preloaded tar for v1.26.2 on docker
I0307 10:28:32.794723 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:28:32.795675 7018 cache.go:193] Successfully downloaded all kic artifacts
I0307 10:28:32.795702 7018 start.go:364] acquiring machines lock for multinode-260000-m02: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 10:28:32.795787 7018 start.go:368] acquired machines lock for "multinode-260000-m02" in 65.198µs
I0307 10:28:32.795817 7018 start.go:96] Skipping create...Using existing machine configuration
I0307 10:28:32.795824 7018 fix.go:55] fixHost starting: m02
I0307 10:28:32.796234 7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:28:32.796271 7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0307 10:28:32.804078 7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51665
I0307 10:28:32.804430 7018 main.go:141] libmachine: () Calling .GetVersion
I0307 10:28:32.804833 7018 main.go:141] libmachine: Using API Version 1
I0307 10:28:32.804855 7018 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 10:28:32.805065 7018 main.go:141] libmachine: () Calling .GetMachineName
I0307 10:28:32.805179 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:32.805269 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetState
I0307 10:28:32.805361 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:28:32.805423 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid from json: 6295
I0307 10:28:32.806220 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid 6295 missing from process table
I0307 10:28:32.806256 7018 fix.go:103] recreateIfNeeded on multinode-260000-m02: state=Stopped err=<nil>
I0307 10:28:32.806268 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
W0307 10:28:32.806350 7018 fix.go:129] unexpected machine state, will restart: <nil>
I0307 10:28:32.827377 7018 out.go:177] * Restarting existing hyperkit VM for "multinode-260000-m02" ...
I0307 10:28:32.869734 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .Start
I0307 10:28:32.869997 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:28:32.870091 7018 main.go:141] libmachine: (multinode-260000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid
I0307 10:28:32.871656 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid 6295 missing from process table
I0307 10:28:32.871680 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | pid 6295 is in state "Stopped"
I0307 10:28:32.871712 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid...
I0307 10:28:32.871965 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Using UUID 835471be-bd14-11ed-9c3c-149d997fca88
I0307 10:28:32.899206 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Generated MAC ba:65:3c:6f:8d:dc
I0307 10:28:32.899232 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000
I0307 10:28:32.899404 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"835471be-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000395b00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
I0307 10:28:32.899444 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"835471be-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000395b00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
I0307 10:28:32.899480 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "835471be-bd14-11ed-9c3c-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/multinode-260000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage,/Users/j
enkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"}
I0307 10:28:32.899519 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 835471be-bd14-11ed-9c3c-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/multinode-260000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/mult
inode-260000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"
I0307 10:28:32.899533 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0307 10:28:32.900716 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Pid is 7098
I0307 10:28:32.901058 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Attempt 0
I0307 10:28:32.901070 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:28:32.901159 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid from json: 7098
I0307 10:28:32.902759 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Searching for ba:65:3c:6f:8d:dc in /var/db/dhcpd_leases ...
I0307 10:28:32.902821 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Found 14 entries in /var/db/dhcpd_leases!
I0307 10:28:32.902837 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d38e}
I0307 10:28:32.902848 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x64078204}
I0307 10:28:32.902856 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ca:14:a2:6d:d0:c ID:1,ca:14:a2:6d:d0:c Lease:0x6407819f}
I0307 10:28:32.902881 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d194}
I0307 10:28:32.902892 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Found match: ba:65:3c:6f:8d:dc
I0307 10:28:32.902900 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | IP: 192.168.64.13
I0307 10:28:32.902925 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetConfigRaw
I0307 10:28:32.903499 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
I0307 10:28:32.903686 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:28:32.904005 7018 machine.go:88] provisioning docker machine ...
I0307 10:28:32.904016 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:32.904127 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetMachineName
I0307 10:28:32.904238 7018 buildroot.go:166] provisioning hostname "multinode-260000-m02"
I0307 10:28:32.904248 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetMachineName
I0307 10:28:32.904335 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:32.904423 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:32.904506 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:32.904579 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:32.904654 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:32.904766 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:28:32.905083 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.13 22 <nil> <nil>}
I0307 10:28:32.905099 7018 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-260000-m02 && echo "multinode-260000-m02" | sudo tee /etc/hostname
I0307 10:28:32.907073 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0307 10:28:32.914845 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0307 10:28:32.915562 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0307 10:28:32.915575 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0307 10:28:32.915583 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0307 10:28:32.915590 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0307 10:28:33.270333 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0307 10:28:33.270350 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0307 10:28:33.374324 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0307 10:28:33.374345 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0307 10:28:33.374362 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0307 10:28:33.374375 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0307 10:28:33.375209 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0307 10:28:33.375231 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0307 10:28:37.885819 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0307 10:28:37.885892 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0307 10:28:37.885906 7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0307 10:28:43.994445 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-260000-m02
I0307 10:28:43.994460 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:43.994617 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:43.994725 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:43.994819 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:43.994903 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:43.995031 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:28:43.995375 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.13 22 <nil> <nil>}
I0307 10:28:43.995387 7018 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-260000-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-260000-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-260000-m02' | sudo tee -a /etc/hosts;
fi
fi
I0307 10:28:44.074363 7018 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0307 10:28:44.074384 7018 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
I0307 10:28:44.074392 7018 buildroot.go:174] setting up certificates
I0307 10:28:44.074399 7018 provision.go:83] configureAuth start
I0307 10:28:44.074407 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetMachineName
I0307 10:28:44.074531 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
I0307 10:28:44.074611 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:44.074689 7018 provision.go:138] copyHostCerts
I0307 10:28:44.074731 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
I0307 10:28:44.074787 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
I0307 10:28:44.074794 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
I0307 10:28:44.074898 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
I0307 10:28:44.075070 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
I0307 10:28:44.075104 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
I0307 10:28:44.075109 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
I0307 10:28:44.075176 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
I0307 10:28:44.075308 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
I0307 10:28:44.075341 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
I0307 10:28:44.075345 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
I0307 10:28:44.075412 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
I0307 10:28:44.075534 7018 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.multinode-260000-m02 san=[192.168.64.13 192.168.64.13 localhost 127.0.0.1 minikube multinode-260000-m02]
I0307 10:28:44.229773 7018 provision.go:172] copyRemoteCerts
I0307 10:28:44.229826 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0307 10:28:44.229842 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:44.229985 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:44.230082 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.230172 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:44.230271 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
I0307 10:28:44.272044 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0307 10:28:44.272115 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0307 10:28:44.288148 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem -> /etc/docker/server.pem
I0307 10:28:44.288225 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0307 10:28:44.303969 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0307 10:28:44.304037 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0307 10:28:44.319850 7018 provision.go:86] duration metric: configureAuth took 245.441923ms
I0307 10:28:44.319862 7018 buildroot.go:189] setting minikube options for container-runtime
I0307 10:28:44.320030 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:28:44.320045 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:44.320174 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:44.320276 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:44.320360 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.320463 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.320545 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:44.320659 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:28:44.320957 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.13 22 <nil> <nil>}
I0307 10:28:44.320966 7018 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0307 10:28:44.395776 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0307 10:28:44.395788 7018 buildroot.go:70] root file system type: tmpfs
I0307 10:28:44.395864 7018 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0307 10:28:44.395879 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:44.396009 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:44.396095 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.396175 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.396263 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:44.396386 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:28:44.396702 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.13 22 <nil> <nil>}
I0307 10:28:44.396747 7018 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.64.12"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0307 10:28:44.478924 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.64.12
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0307 10:28:44.478942 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:44.479070 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:44.479153 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.479233 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:44.479316 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:44.479441 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:28:44.479748 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.13 22 <nil> <nil>}
I0307 10:28:44.479760 7018 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0307 10:28:45.040521 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0307 10:28:45.040534 7018 machine.go:91] provisioned docker machine in 12.136465556s
I0307 10:28:45.040540 7018 start.go:300] post-start starting for "multinode-260000-m02" (driver="hyperkit")
I0307 10:28:45.040546 7018 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0307 10:28:45.040555 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:45.040748 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0307 10:28:45.040760 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:45.040882 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:45.040972 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:45.041059 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:45.041157 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
I0307 10:28:45.087397 7018 ssh_runner.go:195] Run: cat /etc/os-release
I0307 10:28:45.091149 7018 command_runner.go:130] > NAME=Buildroot
I0307 10:28:45.091158 7018 command_runner.go:130] > VERSION=2021.02.12-1-gab7f370-dirty
I0307 10:28:45.091162 7018 command_runner.go:130] > ID=buildroot
I0307 10:28:45.091166 7018 command_runner.go:130] > VERSION_ID=2021.02.12
I0307 10:28:45.091170 7018 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0307 10:28:45.091259 7018 info.go:137] Remote host: Buildroot 2021.02.12
I0307 10:28:45.091268 7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
I0307 10:28:45.091351 7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
I0307 10:28:45.091498 7018 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
I0307 10:28:45.091504 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /etc/ssl/certs/39032.pem
I0307 10:28:45.091663 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0307 10:28:45.100582 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
I0307 10:28:45.126802 7018 start.go:303] post-start completed in 86.252226ms
I0307 10:28:45.126814 7018 fix.go:57] fixHost completed within 12.330934005s
I0307 10:28:45.126826 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:45.126964 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:45.127056 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:45.127154 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:45.127232 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:45.127364 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:28:45.127672 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.13 22 <nil> <nil>}
I0307 10:28:45.127680 7018 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0307 10:28:45.202858 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678213725.334485743
I0307 10:28:45.202870 7018 fix.go:207] guest clock: 1678213725.334485743
I0307 10:28:45.202880 7018 fix.go:220] Guest: 2023-03-07 10:28:45.334485743 -0800 PST Remote: 2023-03-07 10:28:45.126816 -0800 PST m=+87.461319305 (delta=207.669743ms)
I0307 10:28:45.202890 7018 fix.go:191] guest clock delta is within tolerance: 207.669743ms
I0307 10:28:45.202894 7018 start.go:83] releasing machines lock for "multinode-260000-m02", held for 12.407039272s
I0307 10:28:45.202911 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:45.203045 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
I0307 10:28:45.229173 7018 out.go:177] * Found network options:
I0307 10:28:45.249904 7018 out.go:177] - NO_PROXY=192.168.64.12
W0307 10:28:45.271748 7018 proxy.go:119] fail to check proxy env: Error ip not in block
I0307 10:28:45.271793 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:45.272543 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:45.272757 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
I0307 10:28:45.272892 7018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0307 10:28:45.272940 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
W0307 10:28:45.273042 7018 proxy.go:119] fail to check proxy env: Error ip not in block
I0307 10:28:45.273135 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:45.273147 7018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0307 10:28:45.273165 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
I0307 10:28:45.273342 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:45.273376 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
I0307 10:28:45.273607 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
I0307 10:28:45.273659 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:45.273827 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
I0307 10:28:45.273861 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
I0307 10:28:45.274044 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
I0307 10:28:45.313860 7018 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0307 10:28:45.314024 7018 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0307 10:28:45.314083 7018 ssh_runner.go:195] Run: which cri-dockerd
I0307 10:28:45.353726 7018 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0307 10:28:45.354872 7018 command_runner.go:130] > /usr/bin/cri-dockerd
I0307 10:28:45.355027 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0307 10:28:45.362451 7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0307 10:28:45.373398 7018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0307 10:28:45.384177 7018 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0307 10:28:45.384307 7018 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0307 10:28:45.384316 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:28:45.384403 7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 10:28:45.401772 7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
I0307 10:28:45.401790 7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
I0307 10:28:45.401795 7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
I0307 10:28:45.401801 7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
I0307 10:28:45.401805 7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
I0307 10:28:45.401809 7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0307 10:28:45.401813 7018 command_runner.go:130] > registry.k8s.io/pause:3.9
I0307 10:28:45.401818 7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0307 10:28:45.401823 7018 command_runner.go:130] > registry.k8s.io/pause:3.6
I0307 10:28:45.401828 7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0307 10:28:45.401832 7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
I0307 10:28:45.402825 7018 docker.go:630] Got preloaded images: -- stdout --
kindest/kindnetd:v20230227-15197099
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0307 10:28:45.402834 7018 docker.go:560] Images already preloaded, skipping extraction
I0307 10:28:45.402840 7018 start.go:485] detecting cgroup driver to use...
I0307 10:28:45.402914 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 10:28:45.415287 7018 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0307 10:28:45.415302 7018 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
I0307 10:28:45.415537 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0307 10:28:45.422829 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0307 10:28:45.429702 7018 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0307 10:28:45.429750 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0307 10:28:45.436708 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 10:28:45.443666 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0307 10:28:45.450827 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 10:28:45.457881 7018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0307 10:28:45.464910 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0307 10:28:45.471731 7018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0307 10:28:45.477787 7018 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0307 10:28:45.477987 7018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0307 10:28:45.484272 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:28:45.566893 7018 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0307 10:28:45.578247 7018 start.go:485] detecting cgroup driver to use...
I0307 10:28:45.578332 7018 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0307 10:28:45.587719 7018 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0307 10:28:45.588048 7018 command_runner.go:130] > [Unit]
I0307 10:28:45.588056 7018 command_runner.go:130] > Description=Docker Application Container Engine
I0307 10:28:45.588070 7018 command_runner.go:130] > Documentation=https://docs.docker.com
I0307 10:28:45.588078 7018 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0307 10:28:45.588085 7018 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0307 10:28:45.588091 7018 command_runner.go:130] > StartLimitBurst=3
I0307 10:28:45.588111 7018 command_runner.go:130] > StartLimitIntervalSec=60
I0307 10:28:45.588119 7018 command_runner.go:130] > [Service]
I0307 10:28:45.588126 7018 command_runner.go:130] > Type=notify
I0307 10:28:45.588130 7018 command_runner.go:130] > Restart=on-failure
I0307 10:28:45.588134 7018 command_runner.go:130] > Environment=NO_PROXY=192.168.64.12
I0307 10:28:45.588141 7018 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0307 10:28:45.588148 7018 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0307 10:28:45.588153 7018 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0307 10:28:45.588159 7018 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0307 10:28:45.588164 7018 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0307 10:28:45.588170 7018 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0307 10:28:45.588176 7018 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0307 10:28:45.588189 7018 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0307 10:28:45.588195 7018 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0307 10:28:45.588199 7018 command_runner.go:130] > ExecStart=
I0307 10:28:45.588218 7018 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
I0307 10:28:45.588223 7018 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0307 10:28:45.588228 7018 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0307 10:28:45.588234 7018 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0307 10:28:45.588238 7018 command_runner.go:130] > LimitNOFILE=infinity
I0307 10:28:45.588247 7018 command_runner.go:130] > LimitNPROC=infinity
I0307 10:28:45.588253 7018 command_runner.go:130] > LimitCORE=infinity
I0307 10:28:45.588259 7018 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0307 10:28:45.588263 7018 command_runner.go:130] > # Only systemd 226 and above support this version.
I0307 10:28:45.588267 7018 command_runner.go:130] > TasksMax=infinity
I0307 10:28:45.588270 7018 command_runner.go:130] > TimeoutStartSec=0
I0307 10:28:45.588276 7018 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0307 10:28:45.588279 7018 command_runner.go:130] > Delegate=yes
I0307 10:28:45.588284 7018 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0307 10:28:45.588294 7018 command_runner.go:130] > KillMode=process
I0307 10:28:45.588298 7018 command_runner.go:130] > [Install]
I0307 10:28:45.588302 7018 command_runner.go:130] > WantedBy=multi-user.target
I0307 10:28:45.588380 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 10:28:45.599940 7018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0307 10:28:45.612861 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 10:28:45.622327 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 10:28:45.630580 7018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0307 10:28:45.653722 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 10:28:45.662024 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 10:28:45.674917 7018 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0307 10:28:45.674931 7018 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0307 10:28:45.674988 7018 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0307 10:28:45.756263 7018 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0307 10:28:45.846497 7018 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0307 10:28:45.846514 7018 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0307 10:28:45.858511 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:28:45.944748 7018 ssh_runner.go:195] Run: sudo systemctl restart docker
I0307 10:28:47.255144 7018 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.310371403s)
I0307 10:28:47.255214 7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 10:28:47.335677 7018 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0307 10:28:47.417454 7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 10:28:47.513228 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:28:47.598471 7018 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0307 10:28:47.611967 7018 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0307 10:28:47.612060 7018 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0307 10:28:47.616814 7018 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0307 10:28:47.616826 7018 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0307 10:28:47.616831 7018 command_runner.go:130] > Device: 16h/22d Inode: 852 Links: 1
I0307 10:28:47.616837 7018 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0307 10:28:47.616851 7018 command_runner.go:130] > Access: 2023-03-07 18:28:47.742167434 +0000
I0307 10:28:47.616856 7018 command_runner.go:130] > Modify: 2023-03-07 18:28:47.742167434 +0000
I0307 10:28:47.616860 7018 command_runner.go:130] > Change: 2023-03-07 18:28:47.744167434 +0000
I0307 10:28:47.616865 7018 command_runner.go:130] > Birth: -
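The "Will wait 60s for socket path" step above amounts to polling `stat` until the path appears or the timeout elapses. A self-contained sketch of that loop (the function name and scratch path are illustrative, not minikube's code):

```shell
# Poll until a path exists or a timeout (in seconds) elapses; returns
# non-zero on timeout. Mirrors the 60s wait for /var/run/cri-dockerd.sock.
wait_for_path() {
  _path="$1"; _timeout="$2"; _waited=0
  until [ -e "$_path" ]; do
    [ "$_waited" -ge "$_timeout" ] && return 1
    sleep 1
    _waited=$((_waited + 1))
  done
  return 0
}
SOCK="$(mktemp)"   # stand-in for /var/run/cri-dockerd.sock
wait_for_path "$SOCK" 5 && echo "path ready"
```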
I0307 10:28:47.617043 7018 start.go:553] Will wait 60s for crictl version
I0307 10:28:47.617089 7018 ssh_runner.go:195] Run: which crictl
I0307 10:28:47.619815 7018 command_runner.go:130] > /usr/bin/crictl
I0307 10:28:47.619873 7018 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0307 10:28:47.691285 7018 command_runner.go:130] > Version: 0.1.0
I0307 10:28:47.691297 7018 command_runner.go:130] > RuntimeName: docker
I0307 10:28:47.691301 7018 command_runner.go:130] > RuntimeVersion: 20.10.23
I0307 10:28:47.691305 7018 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0307 10:28:47.692228 7018 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0307 10:28:47.692301 7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 10:28:47.711035 7018 command_runner.go:130] > 20.10.23
I0307 10:28:47.728475 7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 10:28:47.749259 7018 command_runner.go:130] > 20.10.23
I0307 10:28:47.770120 7018 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
I0307 10:28:47.813210 7018 out.go:177] - env NO_PROXY=192.168.64.12
I0307 10:28:47.835385 7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
I0307 10:28:47.835775 7018 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0307 10:28:47.840292 7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
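The one-liner above is a filter-then-append hosts update: drop any stale `host.minikube.internal` line, append the fresh mapping, then copy the temp file back over `/etc/hosts`. A sketch against a scratch file so no sudo is needed:

```shell
# Filter-then-append rewrite of a hosts-style file, as in the log; a stale
# mapping for host.minikube.internal is replaced by the new one.
HOSTS="$(mktemp)"
printf '127.0.0.1 localhost\n192.168.64.9\thost.minikube.internal\n' > "$HOSTS"
{ grep -v $'\thost.minikube.internal$' "$HOSTS"
  printf '192.168.64.1\thost.minikube.internal\n'; } > "${HOSTS}.new"
mv "${HOSTS}.new" "$HOSTS"
```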
I0307 10:28:47.848646 7018 certs.go:56] Setting up /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000 for IP: 192.168.64.13
I0307 10:28:47.848666 7018 certs.go:186] acquiring lock for shared ca certs: {Name:mk21aa92235e3b083ba3cf4a52527e5734aca22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 10:28:47.848814 7018 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key
I0307 10:28:47.848878 7018 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key
I0307 10:28:47.848891 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0307 10:28:47.848915 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0307 10:28:47.848940 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0307 10:28:47.848960 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0307 10:28:47.849045 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem (1338 bytes)
W0307 10:28:47.849088 7018 certs.go:397] ignoring /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903_empty.pem, impossibly tiny 0 bytes
I0307 10:28:47.849100 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem (1675 bytes)
I0307 10:28:47.849141 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem (1082 bytes)
I0307 10:28:47.849185 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem (1123 bytes)
I0307 10:28:47.849224 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem (1675 bytes)
I0307 10:28:47.849299 7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem (1708 bytes)
I0307 10:28:47.849342 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0307 10:28:47.849367 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem -> /usr/share/ca-certificates/3903.pem
I0307 10:28:47.849386 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /usr/share/ca-certificates/39032.pem
I0307 10:28:47.849662 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0307 10:28:47.865455 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0307 10:28:47.881052 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0307 10:28:47.896926 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0307 10:28:47.912741 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0307 10:28:47.928528 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem --> /usr/share/ca-certificates/3903.pem (1338 bytes)
I0307 10:28:47.945013 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /usr/share/ca-certificates/39032.pem (1708 bytes)
I0307 10:28:47.960635 7018 ssh_runner.go:195] Run: openssl version
I0307 10:28:47.964021 7018 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0307 10:28:47.964272 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0307 10:28:47.971316 7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0307 10:28:47.974134 7018 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 7 18:02 /usr/share/ca-certificates/minikubeCA.pem
I0307 10:28:47.974290 7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 7 18:02 /usr/share/ca-certificates/minikubeCA.pem
I0307 10:28:47.974333 7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0307 10:28:47.977654 7018 command_runner.go:130] > b5213941
I0307 10:28:47.977920 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0307 10:28:47.984887 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3903.pem && ln -fs /usr/share/ca-certificates/3903.pem /etc/ssl/certs/3903.pem"
I0307 10:28:47.992249 7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3903.pem
I0307 10:28:47.995266 7018 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 7 18:06 /usr/share/ca-certificates/3903.pem
I0307 10:28:47.995458 7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 7 18:06 /usr/share/ca-certificates/3903.pem
I0307 10:28:47.995499 7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3903.pem
I0307 10:28:47.998865 7018 command_runner.go:130] > 51391683
I0307 10:28:47.999120 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3903.pem /etc/ssl/certs/51391683.0"
I0307 10:28:48.006141 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/39032.pem && ln -fs /usr/share/ca-certificates/39032.pem /etc/ssl/certs/39032.pem"
I0307 10:28:48.013240 7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/39032.pem
I0307 10:28:48.016074 7018 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 7 18:06 /usr/share/ca-certificates/39032.pem
I0307 10:28:48.016260 7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 7 18:06 /usr/share/ca-certificates/39032.pem
I0307 10:28:48.016294 7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/39032.pem
I0307 10:28:48.019631 7018 command_runner.go:130] > 3ec20f2e
I0307 10:28:48.019880 7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/39032.pem /etc/ssl/certs/3ec20f2e.0"
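Each `openssl x509 -hash -noout` call above computes the 8-hex-digit subject hash OpenSSL uses to look certificates up in `/etc/ssl/certs`, and the following `ln -fs` publishes the cert under `<hash>.0`. A self-contained sketch with a throwaway self-signed cert:

```shell
# Generate a disposable CA cert, compute its OpenSSL subject hash, and link
# it under <hash>.0 the way the log does for minikubeCA.pem and friends.
CERT_DIR="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "${CERT_DIR}/ca.key" -out "${CERT_DIR}/ca.pem" -days 1 2>/dev/null
HASH="$(openssl x509 -hash -noout -in "${CERT_DIR}/ca.pem")"
ln -fs "${CERT_DIR}/ca.pem" "${CERT_DIR}/${HASH}.0"
```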
I0307 10:28:48.026902 7018 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0307 10:28:48.048324 7018 command_runner.go:130] > cgroupfs
I0307 10:28:48.048980 7018 cni.go:84] Creating CNI manager for ""
I0307 10:28:48.048990 7018 cni.go:136] 3 nodes found, recommending kindnet
I0307 10:28:48.048997 7018 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0307 10:28:48.049008 7018 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.13 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-260000 NodeName:multinode-260000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0307 10:28:48.049099 7018 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.13
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-260000-m02"
kubeletExtraArgs:
node-ip: 192.168.64.13
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.12"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
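The generated config above stacks four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A quick sanity check on such a multi-doc file is counting the separators; the file below is a trimmed stand-in for the real config:

```shell
# Count YAML documents in a kubeadm-style multi-doc file: number of
# standalone "---" separators plus one.
CFG="$(mktemp)"
cat > "$CFG" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
DOCS=$(( $(grep -c '^---$' "$CFG") + 1 ))
echo "$DOCS documents"   # prints "4 documents"
```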
I0307 10:28:48.049134 7018 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-260000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.13
[Install]
config:
{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0307 10:28:48.049192 7018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
I0307 10:28:48.055441 7018 command_runner.go:130] > kubeadm
I0307 10:28:48.055448 7018 command_runner.go:130] > kubectl
I0307 10:28:48.055454 7018 command_runner.go:130] > kubelet
I0307 10:28:48.055533 7018 binaries.go:44] Found k8s binaries, skipping transfer
I0307 10:28:48.055575 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0307 10:28:48.061804 7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (453 bytes)
I0307 10:28:48.072809 7018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
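The 453-byte drop-in scp'd above is a standard systemd override: the empty `ExecStart=` line clears the base unit's command before the replacement line takes effect. A reconstruction from the kubelet flags shown earlier in the log (the exact file layout is an assumption), written to a temp dir rather than `/etc/systemd/system`:

```shell
# Reconstructed 10-kubeadm.conf drop-in; the blank ExecStart= resets the
# base kubelet.service command so the flag-laden ExecStart replaces it.
UNIT_DIR="$(mktemp -d)"
cat > "${UNIT_DIR}/10-kubeadm.conf" <<'EOF'
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-260000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.13

[Install]
EOF
```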
I0307 10:28:48.083885 7018 ssh_runner.go:195] Run: grep 192.168.64.12 control-plane.minikube.internal$ /etc/hosts
I0307 10:28:48.086255 7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.12 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0307 10:28:48.093971 7018 host.go:66] Checking if "multinode-260000" exists ...
I0307 10:28:48.094151 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:28:48.094253 7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:28:48.094274 7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0307 10:28:48.101209 7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51684
I0307 10:28:48.101550 7018 main.go:141] libmachine: () Calling .GetVersion
I0307 10:28:48.101900 7018 main.go:141] libmachine: Using API Version 1
I0307 10:28:48.101916 7018 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 10:28:48.102150 7018 main.go:141] libmachine: () Calling .GetMachineName
I0307 10:28:48.102258 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:28:48.102341 7018 start.go:301] JoinCluster: &{Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 10:28:48.102433 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm token create --print-join-command --ttl=0"
I0307 10:28:48.102443 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:28:48.102521 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:28:48.102622 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:28:48.102707 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:28:48.102782 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
I0307 10:28:48.189788 7018 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zh6icb.v6kqx4onyxvfd8hz --discovery-token-ca-cert-hash sha256:d33f97e9e16d7e3e3153d34b9abf6cc9c10aed60f07ce313a956e9c1066684af
I0307 10:28:48.189814 7018 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0307 10:28:48.189833 7018 host.go:66] Checking if "multinode-260000" exists ...
I0307 10:28:48.190161 7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:28:48.190186 7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0307 10:28:48.196916 7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51687
I0307 10:28:48.197249 7018 main.go:141] libmachine: () Calling .GetVersion
I0307 10:28:48.197612 7018 main.go:141] libmachine: Using API Version 1
I0307 10:28:48.197624 7018 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 10:28:48.197818 7018 main.go:141] libmachine: () Calling .GetMachineName
I0307 10:28:48.197901 7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
I0307 10:28:48.198033 7018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl drain multinode-260000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
I0307 10:28:48.198050 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
I0307 10:28:48.198133 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
I0307 10:28:48.198209 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
I0307 10:28:48.198294 7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
I0307 10:28:48.198376 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
I0307 10:28:48.295688 7018 command_runner.go:130] > node/multinode-260000-m02 cordoned
I0307 10:28:51.318733 7018 command_runner.go:130] > pod "busybox-6b86dd6d48-dmrds" has DeletionTimestamp older than 1 seconds, skipping
I0307 10:28:51.318748 7018 command_runner.go:130] > node/multinode-260000-m02 drained
I0307 10:28:51.319712 7018 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
I0307 10:28:51.319724 7018 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-z6kqp, kube-system/kube-proxy-pxshj
I0307 10:28:51.319743 7018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl drain multinode-260000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.121678108s)
I0307 10:28:51.319753 7018 node.go:109] successfully drained node "m02"
I0307 10:28:51.320044 7018 loader.go:373] Config loaded from file: /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:28:51.320243 7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 10:28:51.320537 7018 request.go:1171] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
I0307 10:28:51.320569 7018 round_trippers.go:463] DELETE https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:51.320574 7018 round_trippers.go:469] Request Headers:
I0307 10:28:51.320580 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:51.320586 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:51.320592 7018 round_trippers.go:473] Content-Type: application/json
I0307 10:28:51.323598 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:51.323609 7018 round_trippers.go:577] Response Headers:
I0307 10:28:51.323615 7018 round_trippers.go:580] Audit-Id: d4c330be-b2e7-4781-aecc-cf162ed512f1
I0307 10:28:51.323620 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:51.323625 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:51.323630 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:51.323636 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:51.323643 7018 round_trippers.go:580] Content-Length: 171
I0307 10:28:51.323649 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:51 GMT
I0307 10:28:51.323663 7018 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-260000-m02","kind":"nodes","uid":"75f8e0c4-47f4-43dc-ac5e-5f77d8d4ab3b"}}
I0307 10:28:51.323690 7018 node.go:125] successfully deleted node "m02"
I0307 10:28:51.323697 7018 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0307 10:28:51.323715 7018 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0307 10:28:51.323731 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zh6icb.v6kqx4onyxvfd8hz --discovery-token-ca-cert-hash sha256:d33f97e9e16d7e3e3153d34b9abf6cc9c10aed60f07ce313a956e9c1066684af --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-260000-m02"
I0307 10:28:51.374604 7018 command_runner.go:130] ! W0307 18:28:51.510767 1198 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0307 10:28:51.505076 7018 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0307 10:28:53.147207 7018 command_runner.go:130] > [preflight] Running pre-flight checks
I0307 10:28:53.147229 7018 command_runner.go:130] > [preflight] Reading configuration from the cluster...
I0307 10:28:53.147240 7018 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0307 10:28:53.147249 7018 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0307 10:28:53.147258 7018 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0307 10:28:53.147266 7018 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0307 10:28:53.147275 7018 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0307 10:28:53.147285 7018 command_runner.go:130] > This node has joined the cluster:
I0307 10:28:53.147294 7018 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
I0307 10:28:53.147304 7018 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
I0307 10:28:53.147313 7018 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
I0307 10:28:53.147327 7018 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zh6icb.v6kqx4onyxvfd8hz --discovery-token-ca-cert-hash sha256:d33f97e9e16d7e3e3153d34b9abf6cc9c10aed60f07ce313a956e9c1066684af --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-260000-m02": (1.823577721s)
I0307 10:28:53.147343 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0307 10:28:53.256139 7018 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
I0307 10:28:53.347575 7018 start.go:303] JoinCluster complete in 5.245201975s
I0307 10:28:53.347588 7018 cni.go:84] Creating CNI manager for ""
I0307 10:28:53.347594 7018 cni.go:136] 3 nodes found, recommending kindnet
I0307 10:28:53.347676 7018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0307 10:28:53.350863 7018 command_runner.go:130] > File: /opt/cni/bin/portmap
I0307 10:28:53.350874 7018 command_runner.go:130] > Size: 2798344 Blocks: 5472 IO Block: 4096 regular file
I0307 10:28:53.350882 7018 command_runner.go:130] > Device: 11h/17d Inode: 3542 Links: 1
I0307 10:28:53.350888 7018 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0307 10:28:53.350895 7018 command_runner.go:130] > Access: 2023-03-07 18:27:25.800133630 +0000
I0307 10:28:53.350899 7018 command_runner.go:130] > Modify: 2023-02-24 23:58:49.000000000 +0000
I0307 10:28:53.350904 7018 command_runner.go:130] > Change: 2023-03-07 18:27:24.520133706 +0000
I0307 10:28:53.350907 7018 command_runner.go:130] > Birth: -
I0307 10:28:53.350976 7018 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
I0307 10:28:53.350986 7018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0307 10:28:53.365774 7018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0307 10:28:53.573328 7018 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0307 10:28:53.576007 7018 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0307 10:28:53.577626 7018 command_runner.go:130] > serviceaccount/kindnet unchanged
I0307 10:28:53.586569 7018 command_runner.go:130] > daemonset.apps/kindnet configured
I0307 10:28:53.588317 7018 loader.go:373] Config loaded from file: /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:28:53.588503 7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 10:28:53.588731 7018 round_trippers.go:463] GET https://192.168.64.12:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0307 10:28:53.588737 7018 round_trippers.go:469] Request Headers:
I0307 10:28:53.588744 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:53.588750 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:53.590037 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:53.590045 7018 round_trippers.go:577] Response Headers:
I0307 10:28:53.590053 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:53.590058 7018 round_trippers.go:580] Content-Length: 292
I0307 10:28:53.590065 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:53 GMT
I0307 10:28:53.590074 7018 round_trippers.go:580] Audit-Id: 09b51ea0-529c-4d47-a052-cef6398d810c
I0307 10:28:53.590096 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:53.590105 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:53.590110 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:53.590121 7018 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9058bb7-5525-4245-a92a-3b0f0144c5d4","resourceVersion":"1155","creationTimestamp":"2023-03-07T18:18:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0307 10:28:53.590164 7018 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-260000" context rescaled to 1 replicas
I0307 10:28:53.590178 7018 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0307 10:28:53.633568 7018 out.go:177] * Verifying Kubernetes components...
I0307 10:28:53.691468 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 10:28:53.703497 7018 loader.go:373] Config loaded from file: /Users/jenkins/minikube-integration/15985-3430/kubeconfig
I0307 10:28:53.703698 7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 10:28:53.703918 7018 node_ready.go:35] waiting up to 6m0s for node "multinode-260000-m02" to be "Ready" ...
I0307 10:28:53.703963 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:53.703968 7018 round_trippers.go:469] Request Headers:
I0307 10:28:53.703974 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:53.703981 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:53.705420 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:53.705433 7018 round_trippers.go:577] Response Headers:
I0307 10:28:53.705439 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:53.705445 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:53 GMT
I0307 10:28:53.705455 7018 round_trippers.go:580] Audit-Id: e2d373c1-190f-45e0-b9cf-3d8d054fb1e3
I0307 10:28:53.705460 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:53.705465 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:53.705470 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:53.705557 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
I0307 10:28:54.205959 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:54.205976 7018 round_trippers.go:469] Request Headers:
I0307 10:28:54.205988 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:54.205995 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:54.208023 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:54.208036 7018 round_trippers.go:577] Response Headers:
I0307 10:28:54.208042 7018 round_trippers.go:580] Audit-Id: 162bfd38-128d-4c94-8620-4dd73b77dd1a
I0307 10:28:54.208050 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:54.208055 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:54.208065 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:54.208073 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:54.208080 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:54 GMT
I0307 10:28:54.208268 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
I0307 10:28:54.706066 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:54.706077 7018 round_trippers.go:469] Request Headers:
I0307 10:28:54.706084 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:54.706089 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:54.708076 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:54.708088 7018 round_trippers.go:577] Response Headers:
I0307 10:28:54.708095 7018 round_trippers.go:580] Audit-Id: dd80323e-e17e-4577-b133-2911fcce9fc1
I0307 10:28:54.708100 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:54.708105 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:54.708110 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:54.708115 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:54.708120 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:54 GMT
I0307 10:28:54.708207 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
I0307 10:28:55.206158 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:55.206172 7018 round_trippers.go:469] Request Headers:
I0307 10:28:55.206179 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:55.206184 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:55.207805 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:55.207815 7018 round_trippers.go:577] Response Headers:
I0307 10:28:55.207820 7018 round_trippers.go:580] Audit-Id: 9200c148-32d8-4985-98ec-72d4b636ae7e
I0307 10:28:55.207825 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:55.207831 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:55.207835 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:55.207840 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:55.207845 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:55 GMT
I0307 10:28:55.207923 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
I0307 10:28:55.706104 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:55.706119 7018 round_trippers.go:469] Request Headers:
I0307 10:28:55.706125 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:55.706131 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:55.707769 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:55.707783 7018 round_trippers.go:577] Response Headers:
I0307 10:28:55.707791 7018 round_trippers.go:580] Audit-Id: 0773193b-a44b-4173-a89e-1b4397280289
I0307 10:28:55.707797 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:55.707803 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:55.707808 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:55.707813 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:55.707818 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:55 GMT
I0307 10:28:55.707892 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
I0307 10:28:55.708076 7018 node_ready.go:58] node "multinode-260000-m02" has status "Ready":"False"
I0307 10:28:56.205958 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:56.205974 7018 round_trippers.go:469] Request Headers:
I0307 10:28:56.205981 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:56.205986 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:56.207374 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:56.207390 7018 round_trippers.go:577] Response Headers:
I0307 10:28:56.207399 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:56.207406 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:56.207412 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:56 GMT
I0307 10:28:56.207418 7018 round_trippers.go:580] Audit-Id: 0b890c7d-2626-4ab5-8e75-3a16b9eecf54
I0307 10:28:56.207427 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:56.207433 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:56.207515 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
I0307 10:28:56.705900 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:56.705916 7018 round_trippers.go:469] Request Headers:
I0307 10:28:56.705923 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:56.705928 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:56.707741 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:56.707756 7018 round_trippers.go:577] Response Headers:
I0307 10:28:56.707766 7018 round_trippers.go:580] Audit-Id: 0e59b396-e7bf-4b72-b74c-a01f645f9864
I0307 10:28:56.707778 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:56.707804 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:56.707821 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:56.707834 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:56.707842 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:56 GMT
I0307 10:28:56.707912 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1221","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4772 chars]
I0307 10:28:57.206205 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:57.206216 7018 round_trippers.go:469] Request Headers:
I0307 10:28:57.206228 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:57.206234 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:57.207878 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:57.207889 7018 round_trippers.go:577] Response Headers:
I0307 10:28:57.207894 7018 round_trippers.go:580] Audit-Id: a4dcdc28-4a89-41fc-a490-5614c72a2f7c
I0307 10:28:57.207900 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:57.207905 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:57.207913 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:57.207918 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:57.207923 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:57 GMT
I0307 10:28:57.208010 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1221","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4772 chars]
I0307 10:28:57.706332 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:57.727379 7018 round_trippers.go:469] Request Headers:
I0307 10:28:57.727424 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:57.727437 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:57.731183 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:57.731198 7018 round_trippers.go:577] Response Headers:
I0307 10:28:57.731206 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:57 GMT
I0307 10:28:57.731221 7018 round_trippers.go:580] Audit-Id: f535ff1c-e3e0-4a4e-acf9-6dabcd316387
I0307 10:28:57.731231 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:57.731241 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:57.731249 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:57.731255 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:57.731338 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1221","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4772 chars]
I0307 10:28:57.731568 7018 node_ready.go:58] node "multinode-260000-m02" has status "Ready":"False"
I0307 10:28:58.206943 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:58.206954 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.206960 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.206966 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.208597 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.208612 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.208617 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.208623 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.208628 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.208633 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.208638 7018 round_trippers.go:580] Audit-Id: 14bb95b4-52c5-49f6-baee-19c30e38be33
I0307 10:28:58.208643 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.208733 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1235","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4619 chars]
I0307 10:28:58.208922 7018 node_ready.go:49] node "multinode-260000-m02" has status "Ready":"True"
I0307 10:28:58.208932 7018 node_ready.go:38] duration metric: took 4.5049847s waiting for node "multinode-260000-m02" to be "Ready" ...
I0307 10:28:58.208937 7018 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 10:28:58.208966 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
I0307 10:28:58.208970 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.208977 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.208983 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.211168 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:58.211181 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.211186 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.211192 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.211200 7018 round_trippers.go:580] Audit-Id: 9e29ae0f-c0b8-46e2-b2ef-ac7c8b7cd885
I0307 10:28:58.211206 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.211211 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.211218 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.212031 7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1235"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83248 chars]
I0307 10:28:58.213928 7018 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.213959 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
I0307 10:28:58.213966 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.213972 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.213977 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.215266 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.215275 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.215280 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.215285 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.215299 7018 round_trippers.go:580] Audit-Id: da0297af-ddf8-40bb-ba7e-ee7c25d1d50b
I0307 10:28:58.215307 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.215315 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.215322 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.215421 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6489 chars]
I0307 10:28:58.215654 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:58.215660 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.215667 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.215673 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.217001 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.217011 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.217018 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.217023 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.217030 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.217035 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.217044 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.217052 7018 round_trippers.go:580] Audit-Id: bcd5819d-b6c4-402c-84d8-8b34af188a85
I0307 10:28:58.217231 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:58.217408 7018 pod_ready.go:92] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:58.217413 7018 pod_ready.go:81] duration metric: took 3.477588ms waiting for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.217418 7018 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.217449 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-260000
I0307 10:28:58.217455 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.217463 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.217469 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.218541 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.218548 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.218553 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.218559 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.218569 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.218574 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.218579 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.218584 7018 round_trippers.go:580] Audit-Id: 3ecf0cc4-5524-4969-bf64-78cbfa7bcc64
I0307 10:28:58.218670 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"1080","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6056 chars]
I0307 10:28:58.218878 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:58.218884 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.218890 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.218895 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.220222 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.220239 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.220246 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.220251 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.220256 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.220262 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.220268 7018 round_trippers.go:580] Audit-Id: 16035865-fbff-46a4-82b6-1d4dc225f856
I0307 10:28:58.220272 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.220340 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:58.220511 7018 pod_ready.go:92] pod "etcd-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:58.220516 7018 pod_ready.go:81] duration metric: took 3.092542ms waiting for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.220524 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.220551 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-260000
I0307 10:28:58.220555 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.220561 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.220566 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.221715 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.221722 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.221727 7018 round_trippers.go:580] Audit-Id: db547fd7-e43b-49f4-9206-870682ba8ead
I0307 10:28:58.221738 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.221744 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.221749 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.221754 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.221769 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.221904 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-260000","namespace":"kube-system","uid":"64ba25bc-eee2-433a-b0ef-a13769f04555","resourceVersion":"1143","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"76402f877907c95a3936143f580968be","kubernetes.io/config.mirror":"76402f877907c95a3936143f580968be","kubernetes.io/config.seen":"2023-03-07T18:18:28.739580253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7591 chars]
I0307 10:28:58.222136 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:58.222142 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.222148 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.222153 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.223204 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.223213 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.223218 7018 round_trippers.go:580] Audit-Id: af2553b3-7312-4d2a-a007-6b34fbaa60fe
I0307 10:28:58.223223 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.223229 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.223233 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.223239 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.223243 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.223402 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:58.223567 7018 pod_ready.go:92] pod "kube-apiserver-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:58.223572 7018 pod_ready.go:81] duration metric: took 3.043676ms waiting for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.223578 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.223603 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-260000
I0307 10:28:58.223607 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.223624 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.223632 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.224832 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.224840 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.224845 7018 round_trippers.go:580] Audit-Id: 08c9fdf6-3267-4e2e-935f-9c4e84582ec5
I0307 10:28:58.224850 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.224859 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.224864 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.224869 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.224874 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.225199 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-260000","namespace":"kube-system","uid":"8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c","resourceVersion":"1131","creationTimestamp":"2023-03-07T18:18:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.mirror":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.seen":"2023-03-07T18:18:16.838236256Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7159 chars]
I0307 10:28:58.225429 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:58.225437 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.225443 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.225449 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.226687 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.226694 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.226699 7018 round_trippers.go:580] Audit-Id: 7796790d-620c-401a-9f3a-b4ce8b9acc5f
I0307 10:28:58.226704 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.226710 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.226714 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.226719 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.226725 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.226885 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:58.227057 7018 pod_ready.go:92] pod "kube-controller-manager-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:58.227062 7018 pod_ready.go:81] duration metric: took 3.479487ms waiting for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
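The cycle above (GET the pod, GET its node, then log `has status "Ready":"True"`) repeats once per control-plane pod. The readiness decision itself reduces to scanning `status.conditions` for a condition of type `Ready` with status `True`. A minimal stdlib-only sketch of that check — an illustration, not minikube's actual pod_ready.go code, and `podStatus`/`isPodReady` are invented names:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// podStatus captures only the fields needed for the readiness check,
// mirroring the shape of the Pod JSON bodies logged above.
type podStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isPodReady reports whether the pod JSON contains a condition of
// type "Ready" whose status is "True".
func isPodReady(podJSON []byte) (bool, error) {
	var p podStatus
	if err := json.Unmarshal(podJSON, &p); err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition yet (e.g. pod still being scheduled).
	return false, nil
}

func main() {
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ready, err := isPodReady(sample)
	fmt.Println(ready, err)
}
```

In the real client the same decision is made against a decoded `corev1.Pod`, and the loop retries until the 6m0s timeout shown in the `waiting up to 6m0s` lines.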
I0307 10:28:58.227067 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.407059 7018 request.go:622] Waited for 179.951206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
I0307 10:28:58.407094 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
I0307 10:28:58.407101 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.407154 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.407160 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.408789 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:58.408801 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.408809 7018 round_trippers.go:580] Audit-Id: c45ed864-b7ed-4df5-a14e-1c1a9c154846
I0307 10:28:58.408817 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.408824 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.408829 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.408834 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.408845 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.409069 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8qwhq","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e455149-bbe2-4173-a413-f4962626b233","resourceVersion":"1061","creationTimestamp":"2023-03-07T18:18:41Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
I0307 10:28:58.608673 7018 request.go:622] Waited for 199.329269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:58.608848 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:58.608860 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.608872 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.608882 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.611654 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:58.611670 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.611677 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.611684 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.611692 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.611701 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.611709 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.611715 7018 round_trippers.go:580] Audit-Id: 76524fea-611e-49f8-bb7e-5eb3dc168072
I0307 10:28:58.611840 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:58.612099 7018 pod_ready.go:92] pod "kube-proxy-8qwhq" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:58.612108 7018 pod_ready.go:81] duration metric: took 385.031837ms waiting for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.612116 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
I0307 10:28:58.808367 7018 request.go:622] Waited for 196.171802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
I0307 10:28:58.808492 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
I0307 10:28:58.808504 7018 round_trippers.go:469] Request Headers:
I0307 10:28:58.808517 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:58.808529 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:58.811399 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:58.811415 7018 round_trippers.go:577] Response Headers:
I0307 10:28:58.811423 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:58.811429 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:58.811436 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:58 GMT
I0307 10:28:58.811442 7018 round_trippers.go:580] Audit-Id: 3bbb7a3c-520d-4a16-9e4e-62fab5920986
I0307 10:28:58.811449 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:58.811455 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:58.811559 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pxshj","generateName":"kube-proxy-","namespace":"kube-system","uid":"3ee33e87-083d-4833-a6d4-8b459ec6ea70","resourceVersion":"1218","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
I0307 10:28:59.008406 7018 request.go:622] Waited for 196.512217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:59.008597 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
I0307 10:28:59.008608 7018 round_trippers.go:469] Request Headers:
I0307 10:28:59.008621 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:59.008631 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:59.011231 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:59.011250 7018 round_trippers.go:577] Response Headers:
I0307 10:28:59.011258 7018 round_trippers.go:580] Audit-Id: a7a3df8f-11e9-4890-88c0-bd4fb1da521d
I0307 10:28:59.011266 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:59.011273 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:59.011280 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:59.011289 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:59.011295 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:59 GMT
I0307 10:28:59.011388 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1235","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4619 chars]
I0307 10:28:59.011635 7018 pod_ready.go:92] pod "kube-proxy-pxshj" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:59.011645 7018 pod_ready.go:81] duration metric: took 399.518428ms waiting for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
I0307 10:28:59.011652 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
I0307 10:28:59.208322 7018 request.go:622] Waited for 196.555002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
I0307 10:28:59.208407 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
I0307 10:28:59.208417 7018 round_trippers.go:469] Request Headers:
I0307 10:28:59.208432 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:59.208444 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:59.211802 7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0307 10:28:59.211825 7018 round_trippers.go:577] Response Headers:
I0307 10:28:59.211836 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:59.211865 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:59 GMT
I0307 10:28:59.211875 7018 round_trippers.go:580] Audit-Id: 80279da3-3584-4856-89d4-205b357cfc2e
I0307 10:28:59.211901 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:59.211908 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:59.211916 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:59.212031 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8cm8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9f69548-a872-4d80-aa73-ffba99b33229","resourceVersion":"1005","creationTimestamp":"2023-03-07T18:26:06Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
I0307 10:28:59.407671 7018 request.go:622] Waited for 195.295612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
I0307 10:28:59.407782 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
I0307 10:28:59.407790 7018 round_trippers.go:469] Request Headers:
I0307 10:28:59.407799 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:59.407807 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:59.409534 7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0307 10:28:59.409543 7018 round_trippers.go:577] Response Headers:
I0307 10:28:59.409549 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:59 GMT
I0307 10:28:59.409562 7018 round_trippers.go:580] Audit-Id: dced968d-8259-48a8-a369-67bdece8d0ff
I0307 10:28:59.409577 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:59.409586 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:59.409591 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:59.409597 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:59.409645 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m03","uid":"c193c270-6b50-44d5-962f-c88bf307bb54","resourceVersion":"1109","creationTimestamp":"2023-03-07T18:26:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4330 chars]
I0307 10:28:59.409824 7018 pod_ready.go:92] pod "kube-proxy-q8cm8" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:59.409830 7018 pod_ready.go:81] duration metric: took 398.16179ms waiting for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
I0307 10:28:59.409836 7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:59.607367 7018 request.go:622] Waited for 197.479712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
I0307 10:28:59.607426 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
I0307 10:28:59.607435 7018 round_trippers.go:469] Request Headers:
I0307 10:28:59.607535 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:59.607549 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:59.610313 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:59.610332 7018 round_trippers.go:577] Response Headers:
I0307 10:28:59.610344 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:59 GMT
I0307 10:28:59.610351 7018 round_trippers.go:580] Audit-Id: 831ac5c9-6a6e-4238-9a57-e226e9d7fa9a
I0307 10:28:59.610359 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:59.610366 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:59.610373 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:59.610380 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:59.610482 7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-260000","namespace":"kube-system","uid":"0739e1eb-4026-47ee-b2fe-6a9901c77317","resourceVersion":"1139","creationTimestamp":"2023-03-07T18:18:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.mirror":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.seen":"2023-03-07T18:18:28.739583516Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4889 chars]
I0307 10:28:59.807243 7018 request.go:622] Waited for 196.466836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:59.807382 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
I0307 10:28:59.807393 7018 round_trippers.go:469] Request Headers:
I0307 10:28:59.807405 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:28:59.807416 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:28:59.809503 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:28:59.809522 7018 round_trippers.go:577] Response Headers:
I0307 10:28:59.809534 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:28:59.809565 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:28:59 GMT
I0307 10:28:59.809578 7018 round_trippers.go:580] Audit-Id: 0db6ab63-4a4e-453d-ac64-1584164a0c7d
I0307 10:28:59.809586 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:28:59.809593 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:28:59.809600 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:28:59.809729 7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
I0307 10:28:59.810013 7018 pod_ready.go:92] pod "kube-scheduler-multinode-260000" in "kube-system" namespace has status "Ready":"True"
I0307 10:28:59.810022 7018 pod_ready.go:81] duration metric: took 400.179443ms waiting for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
I0307 10:28:59.810030 7018 pod_ready.go:38] duration metric: took 1.60107891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 10:28:59.810045 7018 system_svc.go:44] waiting for kubelet service to be running ....
I0307 10:28:59.810114 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 10:28:59.818885 7018 system_svc.go:56] duration metric: took 8.836426ms WaitForService to wait for kubelet.
I0307 10:28:59.818896 7018 kubeadm.go:578] duration metric: took 6.228675231s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0307 10:28:59.818910 7018 node_conditions.go:102] verifying NodePressure condition ...
I0307 10:29:00.007159 7018 request.go:622] Waited for 188.194062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes
I0307 10:29:00.007207 7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
I0307 10:29:00.007270 7018 round_trippers.go:469] Request Headers:
I0307 10:29:00.007282 7018 round_trippers.go:473] Accept: application/json, */*
I0307 10:29:00.007294 7018 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0307 10:29:00.010101 7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0307 10:29:00.010120 7018 round_trippers.go:577] Response Headers:
I0307 10:29:00.010131 7018 round_trippers.go:580] Audit-Id: 230c0ab3-666e-4727-a5a5-c4ebee390789
I0307 10:29:00.010139 7018 round_trippers.go:580] Cache-Control: no-cache, private
I0307 10:29:00.010146 7018 round_trippers.go:580] Content-Type: application/json
I0307 10:29:00.010153 7018 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
I0307 10:29:00.010162 7018 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
I0307 10:29:00.010174 7018 round_trippers.go:580] Date: Tue, 07 Mar 2023 18:29:00 GMT
I0307 10:29:00.010474 7018 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1235"},"items":[{"metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16317 chars]
I0307 10:29:00.011046 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:29:00.011058 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:29:00.011066 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:29:00.011071 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:29:00.011075 7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 10:29:00.011082 7018 node_conditions.go:123] node cpu capacity is 2
I0307 10:29:00.011087 7018 node_conditions.go:105] duration metric: took 192.17207ms to run NodePressure ...
I0307 10:29:00.011096 7018 start.go:228] waiting for startup goroutines ...
I0307 10:29:00.011118 7018 start.go:242] writing updated cluster config ...
I0307 10:29:00.011876 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:29:00.012002 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:29:00.054733 7018 out.go:177] * Starting worker node multinode-260000-m03 in cluster multinode-260000
I0307 10:29:00.075685 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:29:00.075744 7018 cache.go:57] Caching tarball of preloaded images
I0307 10:29:00.075937 7018 preload.go:174] Found /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0307 10:29:00.075956 7018 cache.go:60] Finished verifying existence of preloaded tar for v1.26.2 on docker
I0307 10:29:00.076097 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:29:00.077109 7018 cache.go:193] Successfully downloaded all kic artifacts
I0307 10:29:00.077151 7018 start.go:364] acquiring machines lock for multinode-260000-m03: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 10:29:00.077243 7018 start.go:368] acquired machines lock for "multinode-260000-m03" in 73.572µs
I0307 10:29:00.077280 7018 start.go:96] Skipping create...Using existing machine configuration
I0307 10:29:00.077288 7018 fix.go:55] fixHost starting: m03
I0307 10:29:00.077721 7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0307 10:29:00.077794 7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0307 10:29:00.085146 7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51690
I0307 10:29:00.085469 7018 main.go:141] libmachine: () Calling .GetVersion
I0307 10:29:00.085788 7018 main.go:141] libmachine: Using API Version 1
I0307 10:29:00.085809 7018 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 10:29:00.086053 7018 main.go:141] libmachine: () Calling .GetMachineName
I0307 10:29:00.086177 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:00.086254 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetState
I0307 10:29:00.086348 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:29:00.086412 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid from json: 6959
I0307 10:29:00.087210 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid 6959 missing from process table
I0307 10:29:00.087228 7018 fix.go:103] recreateIfNeeded on multinode-260000-m03: state=Stopped err=<nil>
I0307 10:29:00.087236 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
W0307 10:29:00.087313 7018 fix.go:129] unexpected machine state, will restart: <nil>
I0307 10:29:00.108838 7018 out.go:177] * Restarting existing hyperkit VM for "multinode-260000-m03" ...
I0307 10:29:00.150753 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .Start
I0307 10:29:00.151097 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:29:00.151124 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/hyperkit.pid
I0307 10:29:00.151193 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Using UUID 79b2bd18-bd15-11ed-8f77-149d997fca88
I0307 10:29:00.180096 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Generated MAC 12:aa:e8:53:6e:6b
I0307 10:29:00.180120 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000
I0307 10:29:00.180266 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"79b2bd18-bd15-11ed-8f77-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002c11a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
I0307 10:29:00.180309 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"79b2bd18-bd15-11ed-8f77-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002c11a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
I0307 10:29:00.180345 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "79b2bd18-bd15-11ed-8f77-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/multinode-260000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage,/Users/j
enkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"}
I0307 10:29:00.180370 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 79b2bd18-bd15-11ed-8f77-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/multinode-260000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/mult
inode-260000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"
I0307 10:29:00.180383 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0307 10:29:00.181671 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Pid is 7128
I0307 10:29:00.182013 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Attempt 0
I0307 10:29:00.182028 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0307 10:29:00.182112 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid from json: 7128
I0307 10:29:00.183032 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Searching for 12:aa:e8:53:6e:6b in /var/db/dhcpd_leases ...
I0307 10:29:00.183093 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Found 14 entries in /var/db/dhcpd_leases!
I0307 10:29:00.183123 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d3d8}
I0307 10:29:00.183132 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d38e}
I0307 10:29:00.183144 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x64078204}
I0307 10:29:00.183153 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Found match: 12:aa:e8:53:6e:6b
I0307 10:29:00.183173 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | IP: 192.168.64.15
I0307 10:29:00.183209 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetConfigRaw
I0307 10:29:00.183787 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetIP
I0307 10:29:00.183966 7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
I0307 10:29:00.184309 7018 machine.go:88] provisioning docker machine ...
I0307 10:29:00.184319 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:00.184441 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetMachineName
I0307 10:29:00.184532 7018 buildroot.go:166] provisioning hostname "multinode-260000-m03"
I0307 10:29:00.184543 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetMachineName
I0307 10:29:00.184630 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:00.184704 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:00.184784 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:00.184866 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:00.184944 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:00.185055 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:29:00.185361 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.15 22 <nil> <nil>}
I0307 10:29:00.185370 7018 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-260000-m03 && echo "multinode-260000-m03" | sudo tee /etc/hostname
I0307 10:29:00.188080 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0307 10:29:00.195643 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0307 10:29:00.196371 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0307 10:29:00.196384 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0307 10:29:00.196392 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0307 10:29:00.196404 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0307 10:29:00.552977 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0307 10:29:00.552995 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0307 10:29:00.657061 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0307 10:29:00.657081 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0307 10:29:00.657091 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0307 10:29:00.657102 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0307 10:29:00.657942 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0307 10:29:00.657953 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0307 10:29:05.166903 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0307 10:29:05.166935 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0307 10:29:05.166942 7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0307 10:29:11.261985 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-260000-m03
I0307 10:29:11.262003 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:11.262135 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:11.262237 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.262323 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.262404 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:11.262539 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:29:11.262858 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.15 22 <nil> <nil>}
I0307 10:29:11.262870 7018 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-260000-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-260000-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-260000-m03' | sudo tee -a /etc/hosts;
fi
fi
I0307 10:29:11.336626 7018 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0307 10:29:11.336642 7018 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
I0307 10:29:11.336650 7018 buildroot.go:174] setting up certificates
I0307 10:29:11.336658 7018 provision.go:83] configureAuth start
I0307 10:29:11.336666 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetMachineName
I0307 10:29:11.336795 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetIP
I0307 10:29:11.336894 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:11.336973 7018 provision.go:138] copyHostCerts
I0307 10:29:11.337009 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
I0307 10:29:11.337059 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
I0307 10:29:11.337064 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
I0307 10:29:11.337174 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
I0307 10:29:11.337363 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
I0307 10:29:11.337395 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
I0307 10:29:11.337400 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
I0307 10:29:11.337460 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
I0307 10:29:11.337578 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
I0307 10:29:11.337610 7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
I0307 10:29:11.337615 7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
I0307 10:29:11.337670 7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
I0307 10:29:11.337789 7018 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.multinode-260000-m03 san=[192.168.64.15 192.168.64.15 localhost 127.0.0.1 minikube multinode-260000-m03]
I0307 10:29:11.427111 7018 provision.go:172] copyRemoteCerts
I0307 10:29:11.427165 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0307 10:29:11.427179 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:11.427324 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:11.427419 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.427541 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:11.427623 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
I0307 10:29:11.465606 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0307 10:29:11.465676 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0307 10:29:11.481351 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem -> /etc/docker/server.pem
I0307 10:29:11.481417 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0307 10:29:11.496933 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0307 10:29:11.496996 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0307 10:29:11.512347 7018 provision.go:86] duration metric: configureAuth took 175.680754ms
I0307 10:29:11.512360 7018 buildroot.go:189] setting minikube options for container-runtime
I0307 10:29:11.512526 7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 10:29:11.512539 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:11.512663 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:11.512758 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:11.512840 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.512918 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.512998 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:11.513100 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:29:11.513391 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.15 22 <nil> <nil>}
I0307 10:29:11.513399 7018 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0307 10:29:11.579311 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0307 10:29:11.579323 7018 buildroot.go:70] root file system type: tmpfs
I0307 10:29:11.579401 7018 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0307 10:29:11.579411 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:11.579540 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:11.579641 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.579740 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.579829 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:11.579956 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:29:11.580270 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.15 22 <nil> <nil>}
I0307 10:29:11.580316 7018 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.64.12"
Environment="NO_PROXY=192.168.64.12,192.168.64.13"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0307 10:29:11.652702 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.64.12
Environment=NO_PROXY=192.168.64.12,192.168.64.13
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0307 10:29:11.652720 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:11.652848 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:11.652922 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.653006 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:11.653098 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:11.653250 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:29:11.653560 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.15 22 <nil> <nil>}
I0307 10:29:11.653573 7018 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0307 10:29:12.175360 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0307 10:29:12.175374 7018 machine.go:91] provisioned docker machine in 11.991002684s
I0307 10:29:12.175381 7018 start.go:300] post-start starting for "multinode-260000-m03" (driver="hyperkit")
I0307 10:29:12.175386 7018 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0307 10:29:12.175396 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:12.175581 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0307 10:29:12.175596 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:12.175686 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:12.175759 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:12.175827 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:12.175912 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
I0307 10:29:12.214369 7018 ssh_runner.go:195] Run: cat /etc/os-release
I0307 10:29:12.216755 7018 command_runner.go:130] > NAME=Buildroot
I0307 10:29:12.216767 7018 command_runner.go:130] > VERSION=2021.02.12-1-gab7f370-dirty
I0307 10:29:12.216773 7018 command_runner.go:130] > ID=buildroot
I0307 10:29:12.216793 7018 command_runner.go:130] > VERSION_ID=2021.02.12
I0307 10:29:12.216800 7018 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0307 10:29:12.216963 7018 info.go:137] Remote host: Buildroot 2021.02.12
I0307 10:29:12.216972 7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
I0307 10:29:12.217057 7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
I0307 10:29:12.217200 7018 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
I0307 10:29:12.217206 7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /etc/ssl/certs/39032.pem
I0307 10:29:12.217370 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0307 10:29:12.223606 7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
I0307 10:29:12.239878 7018 start.go:303] post-start completed in 64.487773ms
I0307 10:29:12.239896 7018 fix.go:57] fixHost completed within 12.162546961s
I0307 10:29:12.239910 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:12.240038 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:12.240131 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:12.240212 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:12.240290 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:12.240409 7018 main.go:141] libmachine: Using SSH client type: native
I0307 10:29:12.240714 7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil> [] 0s} 192.168.64.15 22 <nil> <nil>}
I0307 10:29:12.240722 7018 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0307 10:29:12.305514 7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678213752.437212482
I0307 10:29:12.305525 7018 fix.go:207] guest clock: 1678213752.437212482
I0307 10:29:12.305531 7018 fix.go:220] Guest: 2023-03-07 10:29:12.437212482 -0800 PST Remote: 2023-03-07 10:29:12.239899 -0800 PST m=+114.574278242 (delta=197.313482ms)
I0307 10:29:12.305540 7018 fix.go:191] guest clock delta is within tolerance: 197.313482ms
I0307 10:29:12.305543 7018 start.go:83] releasing machines lock for "multinode-260000-m03", held for 12.228234634s
I0307 10:29:12.305562 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:12.305681 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetIP
I0307 10:29:12.327827 7018 out.go:177] * Found network options:
I0307 10:29:12.349261 7018 out.go:177] - NO_PROXY=192.168.64.12,192.168.64.13
W0307 10:29:12.371206 7018 proxy.go:119] fail to check proxy env: Error ip not in block
W0307 10:29:12.371232 7018 proxy.go:119] fail to check proxy env: Error ip not in block
I0307 10:29:12.371252 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:12.372006 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:12.372213 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
I0307 10:29:12.372340 7018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0307 10:29:12.372393 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
W0307 10:29:12.372424 7018 proxy.go:119] fail to check proxy env: Error ip not in block
W0307 10:29:12.372448 7018 proxy.go:119] fail to check proxy env: Error ip not in block
I0307 10:29:12.372546 7018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0307 10:29:12.372566 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
I0307 10:29:12.372582 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:12.372778 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
I0307 10:29:12.372789 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:12.372944 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:12.372988 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
I0307 10:29:12.373142 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
I0307 10:29:12.373168 7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
I0307 10:29:12.373363 7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
I0307 10:29:12.410014 7018 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0307 10:29:12.410159 7018 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0307 10:29:12.410222 7018 ssh_runner.go:195] Run: which cri-dockerd
I0307 10:29:12.452473 7018 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0307 10:29:12.452552 7018 command_runner.go:130] > /usr/bin/cri-dockerd
I0307 10:29:12.452679 7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0307 10:29:12.459245 7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0307 10:29:12.470219 7018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0307 10:29:12.486201 7018 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0307 10:29:12.486242 7018 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0307 10:29:12.486250 7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 10:29:12.486346 7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 10:29:12.502691 7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
I0307 10:29:12.502703 7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
I0307 10:29:12.502708 7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
I0307 10:29:12.502712 7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
I0307 10:29:12.502716 7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
I0307 10:29:12.502719 7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0307 10:29:12.502723 7018 command_runner.go:130] > registry.k8s.io/pause:3.9
I0307 10:29:12.502728 7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0307 10:29:12.502732 7018 command_runner.go:130] > registry.k8s.io/pause:3.6
I0307 10:29:12.502737 7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0307 10:29:12.503864 7018 docker.go:630] Got preloaded images: -- stdout --
kindest/kindnetd:v20230227-15197099
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0307 10:29:12.503874 7018 docker.go:560] Images already preloaded, skipping extraction
I0307 10:29:12.503880 7018 start.go:485] detecting cgroup driver to use...
I0307 10:29:12.503940 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 10:29:12.523327 7018 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0307 10:29:12.523340 7018 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
I0307 10:29:12.524671 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0307 10:29:12.536597 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0307 10:29:12.544140 7018 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0307 10:29:12.544193 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0307 10:29:12.550489 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 10:29:12.556842 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0307 10:29:12.563095 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 10:29:12.569445 7018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0307 10:29:12.575946 7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0307 10:29:12.582556 7018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0307 10:29:12.588055 7018 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0307 10:29:12.588181 7018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0307 10:29:12.594025 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:29:12.673337 7018 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0307 10:29:12.685510 7018 start.go:485] detecting cgroup driver to use...
I0307 10:29:12.685584 7018 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0307 10:29:12.695059 7018 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0307 10:29:12.696323 7018 command_runner.go:130] > [Unit]
I0307 10:29:12.696352 7018 command_runner.go:130] > Description=Docker Application Container Engine
I0307 10:29:12.696362 7018 command_runner.go:130] > Documentation=https://docs.docker.com
I0307 10:29:12.696367 7018 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0307 10:29:12.696371 7018 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0307 10:29:12.696375 7018 command_runner.go:130] > StartLimitBurst=3
I0307 10:29:12.696382 7018 command_runner.go:130] > StartLimitIntervalSec=60
I0307 10:29:12.696388 7018 command_runner.go:130] > [Service]
I0307 10:29:12.696393 7018 command_runner.go:130] > Type=notify
I0307 10:29:12.696397 7018 command_runner.go:130] > Restart=on-failure
I0307 10:29:12.696402 7018 command_runner.go:130] > Environment=NO_PROXY=192.168.64.12
I0307 10:29:12.696406 7018 command_runner.go:130] > Environment=NO_PROXY=192.168.64.12,192.168.64.13
I0307 10:29:12.696413 7018 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0307 10:29:12.696422 7018 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0307 10:29:12.696428 7018 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0307 10:29:12.696433 7018 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0307 10:29:12.696439 7018 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0307 10:29:12.696445 7018 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0307 10:29:12.696454 7018 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0307 10:29:12.696462 7018 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0307 10:29:12.696468 7018 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0307 10:29:12.696471 7018 command_runner.go:130] > ExecStart=
I0307 10:29:12.696485 7018 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
I0307 10:29:12.696489 7018 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0307 10:29:12.696497 7018 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0307 10:29:12.696503 7018 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0307 10:29:12.696506 7018 command_runner.go:130] > LimitNOFILE=infinity
I0307 10:29:12.696510 7018 command_runner.go:130] > LimitNPROC=infinity
I0307 10:29:12.696514 7018 command_runner.go:130] > LimitCORE=infinity
I0307 10:29:12.696519 7018 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0307 10:29:12.696524 7018 command_runner.go:130] > # Only systemd 226 and above support this version.
I0307 10:29:12.696527 7018 command_runner.go:130] > TasksMax=infinity
I0307 10:29:12.696531 7018 command_runner.go:130] > TimeoutStartSec=0
I0307 10:29:12.696536 7018 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0307 10:29:12.696540 7018 command_runner.go:130] > Delegate=yes
I0307 10:29:12.696549 7018 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0307 10:29:12.696553 7018 command_runner.go:130] > KillMode=process
I0307 10:29:12.696557 7018 command_runner.go:130] > [Install]
I0307 10:29:12.696562 7018 command_runner.go:130] > WantedBy=multi-user.target
I0307 10:29:12.696635 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 10:29:12.705902 7018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0307 10:29:12.738895 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 10:29:12.747844 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 10:29:12.756435 7018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0307 10:29:12.775075 7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 10:29:12.783647 7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 10:29:12.795348 7018 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0307 10:29:12.795358 7018 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0307 10:29:12.795646 7018 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0307 10:29:12.877113 7018 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0307 10:29:12.966218 7018 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0307 10:29:12.966234 7018 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0307 10:29:12.977829 7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 10:29:13.058533 7018 ssh_runner.go:195] Run: sudo systemctl restart docker
I0307 10:30:14.087064 7018 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
I0307 10:30:14.087078 7018 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
I0307 10:30:14.087168 7018 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.028339517s)
I0307 10:30:14.108918 7018 out.go:177]
W0307 10:30:14.130829 7018 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
W0307 10:30:14.130853 7018 out.go:239] *
W0307 10:30:14.131956 7018 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0307 10:30:14.211985 7018 out.go:177]
*
* ==> Docker <==
* -- Journal begins at Tue 2023-03-07 18:27:25 UTC, ends at Tue 2023-03-07 18:30:15 UTC. --
Mar 07 18:28:29 multinode-260000 dockerd[823]: time="2023-03-07T18:28:29.553667299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:28:29 multinode-260000 dockerd[823]: time="2023-03-07T18:28:29.553718416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:28:29 multinode-260000 dockerd[823]: time="2023-03-07T18:28:29.553727844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:28:29 multinode-260000 dockerd[823]: time="2023-03-07T18:28:29.553859099Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b5c1d8a91fa2516e8c80365df84e3b130f3c1999b14147c7032297de307867f9 pid=2478 runtime=io.containerd.runc.v2
Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.057242848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.057307249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.057316475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.057844466Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ac47899a738296394ded2ce0496525097adb38a6412d8fc94b3dce6877e8a33a pid=2668 runtime=io.containerd.runc.v2
Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.175000064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.175197971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.175260153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.175483178Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b76d3e91590c9da6205b8d32d4b932be8104ea717355bd7711e406514dad7dd9 pid=2747 runtime=io.containerd.runc.v2
Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.679769977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.679902993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.679926964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.680065305Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ae65d8b310bf85e2ffa8376af1ac87de2288952c06097e5f806c1db9cba7f352 pid=2836 runtime=io.containerd.runc.v2
Mar 07 18:28:44 multinode-260000 dockerd[823]: time="2023-03-07T18:28:44.600236890Z" level=info msg="shim disconnected" id=fb55a8f7e7acf79ab5acef082e9687db3c86b8350d3822b8162a5264fa8a8737
Mar 07 18:28:44 multinode-260000 dockerd[823]: time="2023-03-07T18:28:44.600660414Z" level=warning msg="cleaning up after shim disconnected" id=fb55a8f7e7acf79ab5acef082e9687db3c86b8350d3822b8162a5264fa8a8737 namespace=moby
Mar 07 18:28:44 multinode-260000 dockerd[823]: time="2023-03-07T18:28:44.600692801Z" level=info msg="cleaning up dead shim"
Mar 07 18:28:44 multinode-260000 dockerd[817]: time="2023-03-07T18:28:44.600890785Z" level=info msg="ignoring event" container=fb55a8f7e7acf79ab5acef082e9687db3c86b8350d3822b8162a5264fa8a8737 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 07 18:28:44 multinode-260000 dockerd[823]: time="2023-03-07T18:28:44.610196627Z" level=warning msg="cleanup warnings time=\"2023-03-07T18:28:44Z\" level=info msg=\"starting signal loop\" namespace=moby pid=3100 runtime=io.containerd.runc.v2\n"
Mar 07 18:28:57 multinode-260000 dockerd[823]: time="2023-03-07T18:28:57.591630748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:28:57 multinode-260000 dockerd[823]: time="2023-03-07T18:28:57.591690709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:28:57 multinode-260000 dockerd[823]: time="2023-03-07T18:28:57.591700253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:28:57 multinode-260000 dockerd[823]: time="2023-03-07T18:28:57.592287539Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d7918bebc54af3eda49d9d26750f59cbb4123a06606515594c02386ec18084eb pid=3307 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
d7918bebc54af 6e38f40d628db About a minute ago Running storage-provisioner 2 195123dbe4fea
ae65d8b310bf8 8c811b4aec35f About a minute ago Running busybox 1 b76d3e91590c9
ac47899a73829 5185b96f0becf About a minute ago Running coredns 1 b5c1d8a91fa25
f4e367464e94a bc00df424dcbf About a minute ago Running kindnet-cni 1 fdbc154f16c5e
b5a7ee396dc60 6f64e7135a6ec 2 minutes ago Running kube-proxy 1 f8fdeffee49cf
fb55a8f7e7acf 6e38f40d628db 2 minutes ago Exited storage-provisioner 1 195123dbe4fea
26cf0a14d586f db8f409d9a5d7 2 minutes ago Running kube-scheduler 1 2553a34510031
84569585e5533 fce326961ae2d 2 minutes ago Running etcd 1 07e789f3cc69b
50c556c12dfe5 240e201d5b0d8 2 minutes ago Running kube-controller-manager 1 9c8e84f5ddfa3
497af6d0e82e1 63d3239c3c159 2 minutes ago Running kube-apiserver 1 00bac04bca161
efd9c03313ad9 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 10 minutes ago Exited busybox 0 6c81d6df615b9
da06b08e56174 5185b96f0becf 11 minutes ago Exited coredns 0 5b66601ca9d1d
37e6cf092e1c2 kindest/kindnetd@sha256:7fc2671641a1a7e7b9b8341964bd7cfe9018f497dc41d58803f88b0cc4030e07 11 minutes ago Exited kindnet-cni 0 ae9d394ad7a79
808d83da8d84b 6f64e7135a6ec 11 minutes ago Exited kube-proxy 0 1bf0ab9eb4c51
2243964fbc4d2 240e201d5b0d8 11 minutes ago Exited kube-controller-manager 0 6ac51e9516a2e
3b27eb7db4c28 fce326961ae2d 11 minutes ago Exited etcd 0 cfcf920b73783
10d167b9d9870 db8f409d9a5d7 11 minutes ago Exited kube-scheduler 0 aef4edf5b492f
3e9b5dec9e21d 63d3239c3c159 11 minutes ago Exited kube-apiserver 0 0721a87b433b9
*
* ==> coredns [ac47899a7382] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 82b95b61957b89eeea31bdaf6987f010031330ef97d5f8469dbdaa80b119a5b0c9955b961009dd5b77ee3ada002b456836be781510516cbd9d015b1a704a24ea
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:38195 - 29344 "HINFO IN 6793254744361962333.4432823132512362091. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009534934s
*
* ==> coredns [da06b08e5617] <==
* [INFO] 10.244.0.3:49753 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000032613s
[INFO] 10.244.0.3:59499 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078206s
[INFO] 10.244.0.3:56252 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000031284s
[INFO] 10.244.0.3:33352 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000029287s
[INFO] 10.244.0.3:57361 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000030398s
[INFO] 10.244.0.3:53316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000041736s
[INFO] 10.244.0.3:43704 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000030091s
[INFO] 10.244.1.2:35223 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133692s
[INFO] 10.244.1.2:37012 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060624s
[INFO] 10.244.1.2:47740 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043499s
[INFO] 10.244.1.2:46035 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059142s
[INFO] 10.244.0.3:34318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102436s
[INFO] 10.244.0.3:55287 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000046353s
[INFO] 10.244.0.3:48922 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076473s
[INFO] 10.244.0.3:58811 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000030082s
[INFO] 10.244.1.2:51939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122385s
[INFO] 10.244.1.2:51550 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000067643s
[INFO] 10.244.1.2:46211 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061787s
[INFO] 10.244.1.2:39798 - 5 "PTR IN 1.64.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011631s
[INFO] 10.244.0.3:57108 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000237714s
[INFO] 10.244.0.3:59650 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101016s
[INFO] 10.244.0.3:45286 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000054929s
[INFO] 10.244.0.3:38551 - 5 "PTR IN 1.64.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085801s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> describe nodes <==
* Name: multinode-260000
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-260000
kubernetes.io/os=linux
minikube.k8s.io/commit=592b1e9939a898d806f69aad174a19c45f317df1
minikube.k8s.io/name=multinode-260000
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_03_07T10_18_30_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 07 Mar 2023 18:18:25 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-260000
AcquireTime: <unset>
RenewTime: Tue, 07 Mar 2023 18:30:15 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 07 Mar 2023 18:28:23 +0000 Tue, 07 Mar 2023 18:18:23 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 07 Mar 2023 18:28:23 +0000 Tue, 07 Mar 2023 18:18:23 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 07 Mar 2023 18:28:23 +0000 Tue, 07 Mar 2023 18:18:23 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 07 Mar 2023 18:28:23 +0000 Tue, 07 Mar 2023 18:28:23 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.12
Hostname: multinode-260000
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2166052Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2166052Ki
pods: 110
System Info:
Machine ID: 7f1600c5bd3943459736c3eb945f3a86
System UUID: 608611ed-0000-0000-9c3c-149d997fca88
Boot ID: 0f92c037-724f-4794-9137-e15efdc0756f
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.2
Kube-Proxy Version: v1.26.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-6b86dd6d48-tw9p8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system coredns-787d4945fb-x8m8v 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 11m
kube-system etcd-multinode-260000 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 11m
kube-system kindnet-gfgwn 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 11m
kube-system kube-apiserver-multinode-260000 250m (12%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-controller-manager-multinode-260000 200m (10%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-proxy-8qwhq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-scheduler-multinode-260000 100m (5%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (10%) 220Mi (10%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 11m kube-proxy
Normal Starting 2m kube-proxy
Normal NodeHasSufficientPID 11m kubelet Node multinode-260000 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 11m kubelet Node multinode-260000 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m kubelet Node multinode-260000 status is now: NodeHasNoDiskPressure
Normal Starting 11m kubelet Starting kubelet.
Normal RegisteredNode 11m node-controller Node multinode-260000 event: Registered Node multinode-260000 in Controller
Normal NodeReady 11m kubelet Node multinode-260000 status is now: NodeReady
Normal Starting 2m8s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m8s (x8 over 2m8s) kubelet Node multinode-260000 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m8s (x8 over 2m8s) kubelet Node multinode-260000 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m8s (x7 over 2m8s) kubelet Node multinode-260000 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m8s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 110s node-controller Node multinode-260000 event: Registered Node multinode-260000 in Controller
Name: multinode-260000-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-260000-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 07 Mar 2023 18:28:52 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-260000-m02
AcquireTime: <unset>
RenewTime: Tue, 07 Mar 2023 18:30:13 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 07 Mar 2023 18:28:57 +0000 Tue, 07 Mar 2023 18:28:51 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 07 Mar 2023 18:28:57 +0000 Tue, 07 Mar 2023 18:28:51 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 07 Mar 2023 18:28:57 +0000 Tue, 07 Mar 2023 18:28:51 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 07 Mar 2023 18:28:57 +0000 Tue, 07 Mar 2023 18:28:57 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.13
Hostname: multinode-260000-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2166052Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2166052Ki
pods: 110
System Info:
Machine ID: 55a0b35cd1c54e50a53cf57138dc4032
System UUID: 835411ed-0000-0000-9c3c-149d997fca88
Boot ID: d711e229-1b86-4d4e-835b-240f221511a4
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.2
Kube-Proxy Version: v1.26.2
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kindnet-z6kqp 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 11m
kube-system kube-proxy-pxshj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 81s kube-proxy
Normal Starting 10m kube-proxy
Normal NodeHasSufficientMemory 11m (x2 over 11m) kubelet Node multinode-260000-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m (x2 over 11m) kubelet Node multinode-260000-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m (x2 over 11m) kubelet Node multinode-260000-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal Starting 11m kubelet Starting kubelet.
Normal NodeReady 10m kubelet Node multinode-260000-m02 status is now: NodeReady
Normal NodeHasSufficientMemory 85s (x2 over 85s) kubelet Node multinode-260000-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 85s (x2 over 85s) kubelet Node multinode-260000-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 85s (x2 over 85s) kubelet Node multinode-260000-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 85s kubelet Updated Node Allocatable limit across pods
Normal Starting 85s kubelet Starting kubelet.
Normal NodeReady 79s kubelet Node multinode-260000-m02 status is now: NodeReady
Name: multinode-260000-m03
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-260000-m03
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 07 Mar 2023 18:26:48 +0000
Taints: node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unreachable:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: multinode-260000-m03
AcquireTime: <unset>
RenewTime: Tue, 07 Mar 2023 18:26:57 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure Unknown Tue, 07 Mar 2023 18:26:56 +0000 Tue, 07 Mar 2023 18:29:06 +0000 NodeStatusUnknown Kubelet stopped posting node status.
DiskPressure Unknown Tue, 07 Mar 2023 18:26:56 +0000 Tue, 07 Mar 2023 18:29:06 +0000 NodeStatusUnknown Kubelet stopped posting node status.
PIDPressure Unknown Tue, 07 Mar 2023 18:26:56 +0000 Tue, 07 Mar 2023 18:29:06 +0000 NodeStatusUnknown Kubelet stopped posting node status.
Ready Unknown Tue, 07 Mar 2023 18:26:56 +0000 Tue, 07 Mar 2023 18:29:06 +0000 NodeStatusUnknown Kubelet stopped posting node status.
Addresses:
InternalIP: 192.168.64.15
Hostname: multinode-260000-m03
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2166052Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2166052Ki
pods: 110
System Info:
Machine ID: 9735f5acb6484418a3add414f55a7294
System UUID: 79b211ed-0000-0000-8f77-149d997fca88
Boot ID: 861c3d48-b666-437f-8ed9-d8b1fb470f7a
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.2
Kube-Proxy Version: v1.26.2
PodCIDR: 10.244.3.0/24
PodCIDRs: 10.244.3.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-6b86dd6d48-dxpfk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 88s
kube-system kindnet-j5gj9 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 4m10s
kube-system kube-proxy-q8cm8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m10s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m4s kube-proxy
Normal Starting 3m25s kube-proxy
Normal Starting 4m11s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m11s (x2 over 4m11s) kubelet Node multinode-260000-m03 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m11s (x2 over 4m11s) kubelet Node multinode-260000-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m11s (x2 over 4m11s) kubelet Node multinode-260000-m03 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m11s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 3m57s kubelet Node multinode-260000-m03 status is now: NodeReady
Normal Starting 3m29s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m29s (x2 over 3m29s) kubelet Node multinode-260000-m03 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m29s (x2 over 3m29s) kubelet Node multinode-260000-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m29s (x2 over 3m29s) kubelet Node multinode-260000-m03 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m29s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 3m20s kubelet Node multinode-260000-m03 status is now: NodeReady
Normal RegisteredNode 110s node-controller Node multinode-260000-m03 event: Registered Node multinode-260000-m03 in Controller
Normal NodeNotReady 70s node-controller Node multinode-260000-m03 status is now: NodeNotReady
*
* ==> dmesg <==
* [ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.027546] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
[ +4.610481] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
[ +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
[ +0.006956] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.234653] systemd-fstab-generator[125]: Ignoring "noauto" for root device
[ +0.038923] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +1.865054] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
[ +26.325676] systemd-fstab-generator[519]: Ignoring "noauto" for root device
[ +0.079667] systemd-fstab-generator[530]: Ignoring "noauto" for root device
[ +0.829195] systemd-fstab-generator[748]: Ignoring "noauto" for root device
[ +0.185897] systemd-fstab-generator[784]: Ignoring "noauto" for root device
[ +0.081951] systemd-fstab-generator[795]: Ignoring "noauto" for root device
[ +0.091850] systemd-fstab-generator[808]: Ignoring "noauto" for root device
[ +1.339091] systemd-fstab-generator[964]: Ignoring "noauto" for root device
[ +0.092219] systemd-fstab-generator[975]: Ignoring "noauto" for root device
[ +0.090588] systemd-fstab-generator[986]: Ignoring "noauto" for root device
[ +0.093612] systemd-fstab-generator[997]: Ignoring "noauto" for root device
[Mar 7 18:28] systemd-fstab-generator[1233]: Ignoring "noauto" for root device
[ +0.239907] kauditd_printk_skb: 67 callbacks suppressed
[ +7.083787] kauditd_printk_skb: 8 callbacks suppressed
[ +10.861594] kauditd_printk_skb: 16 callbacks suppressed
*
* ==> etcd [3b27eb7db4c2] <==
* {"level":"info","ts":"2023-03-07T18:18:23.584Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-07T18:18:24.257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 is starting a new election at term 1"}
{"level":"info","ts":"2023-03-07T18:18:24.257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became pre-candidate at term 1"}
{"level":"info","ts":"2023-03-07T18:18:24.257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgPreVoteResp from 893b0beac40933c0 at term 1"}
{"level":"info","ts":"2023-03-07T18:18:24.257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became candidate at term 2"}
{"level":"info","ts":"2023-03-07T18:18:24.257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgVoteResp from 893b0beac40933c0 at term 2"}
{"level":"info","ts":"2023-03-07T18:18:24.257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became leader at term 2"}
{"level":"info","ts":"2023-03-07T18:18:24.258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 893b0beac40933c0 elected leader 893b0beac40933c0 at term 2"}
{"level":"info","ts":"2023-03-07T18:18:24.260Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-07T18:18:24.261Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"893b0beac40933c0","local-member-attributes":"{Name:multinode-260000 ClientURLs:[https://192.168.64.12:2379]}","request-path":"/0/members/893b0beac40933c0/attributes","cluster-id":"51ecae2d8304f353","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-07T18:18:24.261Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-07T18:18:24.262Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-07T18:18:24.262Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-07T18:18:24.263Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-07T18:18:24.281Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-07T18:18:24.281Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-07T18:18:24.263Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-07T18:18:24.281Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-07T18:18:24.283Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.12:2379"}
{"level":"info","ts":"2023-03-07T18:26:59.661Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-03-07T18:26:59.661Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"multinode-260000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"]}
{"level":"info","ts":"2023-03-07T18:26:59.676Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"893b0beac40933c0","current-leader-member-id":"893b0beac40933c0"}
{"level":"info","ts":"2023-03-07T18:26:59.677Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.64.12:2380"}
{"level":"info","ts":"2023-03-07T18:26:59.678Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.64.12:2380"}
{"level":"info","ts":"2023-03-07T18:26:59.678Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"multinode-260000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"]}
*
* ==> etcd [84569585e553] <==
* {"level":"info","ts":"2023-03-07T18:28:10.548Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","added-peer-id":"893b0beac40933c0","added-peer-peer-urls":["https://192.168.64.12:2380"]}
{"level":"info","ts":"2023-03-07T18:28:10.549Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-07T18:28:10.549Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-07T18:28:10.550Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-07T18:28:10.550Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"893b0beac40933c0","initial-advertise-peer-urls":["https://192.168.64.12:2380"],"listen-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.12:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-07T18:28:10.550Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-07T18:28:10.554Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-07T18:28:10.555Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-07T18:28:10.555Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-07T18:28:10.555Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.64.12:2380"}
{"level":"info","ts":"2023-03-07T18:28:10.555Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.64.12:2380"}
{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 is starting a new election at term 2"}
{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became pre-candidate at term 2"}
{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgPreVoteResp from 893b0beac40933c0 at term 2"}
{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became candidate at term 3"}
{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgVoteResp from 893b0beac40933c0 at term 3"}
{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became leader at term 3"}
{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 893b0beac40933c0 elected leader 893b0beac40933c0 at term 3"}
{"level":"info","ts":"2023-03-07T18:28:11.926Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"893b0beac40933c0","local-member-attributes":"{Name:multinode-260000 ClientURLs:[https://192.168.64.12:2379]}","request-path":"/0/members/893b0beac40933c0/attributes","cluster-id":"51ecae2d8304f353","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-07T18:28:11.926Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-07T18:28:11.927Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-07T18:28:11.927Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-07T18:28:11.926Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-07T18:28:11.929Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-07T18:28:11.932Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.12:2379"}
*
* ==> kernel <==
* 18:30:16 up 2 min, 0 users, load average: 0.12, 0.06, 0.01
Linux multinode-260000 5.10.57 #1 SMP Fri Feb 24 23:00:41 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kindnet [37e6cf092e1c] <==
* I0307 18:26:28.447090 1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
I0307 18:26:28.447127 1 main.go:227] handling current node
I0307 18:26:28.447136 1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
I0307 18:26:28.447141 1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24]
I0307 18:26:28.447369 1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
I0307 18:26:28.447402 1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.2.0/24]
I0307 18:26:38.451649 1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
I0307 18:26:38.451684 1 main.go:227] handling current node
I0307 18:26:38.451692 1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
I0307 18:26:38.451696 1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24]
I0307 18:26:38.451779 1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
I0307 18:26:38.451806 1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.2.0/24]
I0307 18:26:48.456938 1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
I0307 18:26:48.457106 1 main.go:227] handling current node
I0307 18:26:48.457237 1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
I0307 18:26:48.457326 1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24]
I0307 18:26:48.457646 1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
I0307 18:26:48.457815 1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24]
I0307 18:26:48.457898 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.64.15 Flags: [] Table: 0}
I0307 18:26:58.466487 1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
I0307 18:26:58.466647 1 main.go:227] handling current node
I0307 18:26:58.466702 1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
I0307 18:26:58.466828 1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24]
I0307 18:26:58.466977 1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
I0307 18:26:58.467105 1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24]
*
* ==> kindnet [f4e367464e94] <==
* I0307 18:29:27.914842 1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24]
I0307 18:29:37.926398 1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
I0307 18:29:37.926411 1 main.go:227] handling current node
I0307 18:29:37.926418 1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
I0307 18:29:37.926421 1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24]
I0307 18:29:37.926480 1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
I0307 18:29:37.926485 1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24]
I0307 18:29:47.934496 1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
I0307 18:29:47.934564 1 main.go:227] handling current node
I0307 18:29:47.934612 1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
I0307 18:29:47.934691 1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24]
I0307 18:29:47.934816 1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
I0307 18:29:47.934920 1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24]
I0307 18:29:57.940656 1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
I0307 18:29:57.940856 1 main.go:227] handling current node
I0307 18:29:57.940976 1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
I0307 18:29:57.941042 1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24]
I0307 18:29:57.941226 1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
I0307 18:29:57.941331 1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24]
I0307 18:30:07.945187 1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
I0307 18:30:07.945222 1 main.go:227] handling current node
I0307 18:30:07.945230 1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
I0307 18:30:07.945235 1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24]
I0307 18:30:07.945510 1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
I0307 18:30:07.945735 1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24]
*
* ==> kube-apiserver [3e9b5dec9e21] <==
* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0307 18:26:59.674314 1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0307 18:26:59.674348 1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0307 18:26:59.674380 1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
*
* ==> kube-apiserver [497af6d0e82e] <==
* I0307 18:28:13.110130 1 establishing_controller.go:76] Starting EstablishingController
I0307 18:28:13.110138 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0307 18:28:13.110297 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0307 18:28:13.110308 1 crd_finalizer.go:266] Starting CRDFinalizer
I0307 18:28:13.110418 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0307 18:28:13.110677 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0307 18:28:13.216868 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0307 18:28:13.222242 1 shared_informer.go:280] Caches are synced for node_authorizer
I0307 18:28:13.225197 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0307 18:28:13.291153 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0307 18:28:13.292250 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0307 18:28:13.292257 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0307 18:28:13.292505 1 cache.go:39] Caches are synced for autoregister controller
I0307 18:28:13.294755 1 shared_informer.go:280] Caches are synced for configmaps
I0307 18:28:13.300044 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0307 18:28:13.300090 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0307 18:28:13.906724 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0307 18:28:14.098838 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0307 18:28:15.623589 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0307 18:28:15.731369 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0307 18:28:15.739807 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0307 18:28:15.781083 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0307 18:28:15.785959 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0307 18:28:26.361442 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0307 18:28:26.382833 1 controller.go:615] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [2243964fbc4d] <==
* I0307 18:18:51.296089 1 node_lifecycle_controller.go:1231] Controller detected that some Nodes are Ready. Exiting master disruption mode.
W0307 18:19:13.246738 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-260000-m02" does not exist
I0307 18:19:13.257767 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pxshj"
I0307 18:19:13.263613 1 range_allocator.go:372] Set node multinode-260000-m02 PodCIDR to [10.244.1.0/24]
I0307 18:19:13.263757 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-z6kqp"
W0307 18:19:16.299141 1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-260000-m02. Assuming now as a timestamp.
I0307 18:19:16.299544 1 event.go:294] "Event occurred" object="multinode-260000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-260000-m02 event: Registered Node multinode-260000-m02 in Controller"
W0307 18:19:26.716043 1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
I0307 18:19:28.956380 1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
I0307 18:19:28.986793 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-dmrds"
I0307 18:19:28.992627 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-tw9p8"
I0307 18:19:31.308952 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48-dmrds" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-6b86dd6d48-dmrds"
W0307 18:26:06.133983 1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
W0307 18:26:06.134312 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-260000-m03" does not exist
I0307 18:26:06.140204 1 range_allocator.go:372] Set node multinode-260000-m03 PodCIDR to [10.244.2.0/24]
I0307 18:26:06.145606 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j5gj9"
I0307 18:26:06.155170 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q8cm8"
W0307 18:26:06.393676 1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-260000-m03. Assuming now as a timestamp.
I0307 18:26:06.393890 1 event.go:294] "Event occurred" object="multinode-260000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-260000-m03 event: Registered Node multinode-260000-m03 in Controller"
W0307 18:26:19.558458 1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
W0307 18:26:47.345943 1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
W0307 18:26:48.162669 1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
W0307 18:26:48.162859 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-260000-m03" does not exist
I0307 18:26:48.168066 1 range_allocator.go:372] Set node multinode-260000-m03 PodCIDR to [10.244.3.0/24]
W0307 18:26:56.666685 1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m03 node
*
* ==> kube-controller-manager [50c556c12dfe] <==
* I0307 18:28:26.349894 1 shared_informer.go:280] Caches are synced for deployment
I0307 18:28:26.352648 1 shared_informer.go:280] Caches are synced for ReplicaSet
I0307 18:28:26.355935 1 shared_informer.go:280] Caches are synced for disruption
I0307 18:28:26.360793 1 shared_informer.go:280] Caches are synced for ephemeral
I0307 18:28:26.363962 1 shared_informer.go:280] Caches are synced for daemon sets
I0307 18:28:26.364187 1 shared_informer.go:280] Caches are synced for GC
I0307 18:28:26.369119 1 shared_informer.go:280] Caches are synced for endpoint
I0307 18:28:26.372783 1 shared_informer.go:280] Caches are synced for PVC protection
I0307 18:28:26.385561 1 shared_informer.go:280] Caches are synced for stateful set
I0307 18:28:26.395694 1 shared_informer.go:280] Caches are synced for ReplicationController
I0307 18:28:26.443107 1 shared_informer.go:280] Caches are synced for persistent volume
I0307 18:28:26.783215 1 shared_informer.go:280] Caches are synced for garbage collector
I0307 18:28:26.783336 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0307 18:28:26.789489 1 shared_informer.go:280] Caches are synced for garbage collector
I0307 18:28:48.464881 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-dxpfk"
W0307 18:28:51.475477 1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m03 node
W0307 18:28:52.260934 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-260000-m02" does not exist
W0307 18:28:52.261307 1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m03 node
I0307 18:28:52.266304 1 range_allocator.go:372] Set node multinode-260000-m02 PodCIDR to [10.244.1.0/24]
W0307 18:28:57.920623 1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
I0307 18:29:01.349195 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48-dmrds" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-6b86dd6d48-dmrds"
I0307 18:29:06.355165 1 event.go:294] "Event occurred" object="multinode-260000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-260000-m03 status is now: NodeNotReady"
W0307 18:29:06.355726 1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
I0307 18:29:06.363745 1 event.go:294] "Event occurred" object="kube-system/kindnet-j5gj9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0307 18:29:06.369443 1 event.go:294] "Event occurred" object="kube-system/kube-proxy-q8cm8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
*
* ==> kube-proxy [808d83da8d84] <==
* I0307 18:18:42.494750 1 node.go:163] Successfully retrieved node IP: 192.168.64.12
I0307 18:18:42.494821 1 server_others.go:109] "Detected node IP" address="192.168.64.12"
I0307 18:18:42.494837 1 server_others.go:535] "Using iptables proxy"
I0307 18:18:42.540347 1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0307 18:18:42.540362 1 server_others.go:176] "Using iptables Proxier"
I0307 18:18:42.540384 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0307 18:18:42.540579 1 server.go:655] "Version info" version="v1.26.2"
I0307 18:18:42.540586 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0307 18:18:42.542139 1 config.go:317] "Starting service config controller"
I0307 18:18:42.542152 1 shared_informer.go:273] Waiting for caches to sync for service config
I0307 18:18:42.542165 1 config.go:226] "Starting endpoint slice config controller"
I0307 18:18:42.542168 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0307 18:18:42.542805 1 config.go:444] "Starting node config controller"
I0307 18:18:42.542810 1 shared_informer.go:273] Waiting for caches to sync for node config
I0307 18:18:42.642774 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0307 18:18:42.642783 1 shared_informer.go:280] Caches are synced for service config
I0307 18:18:42.642913 1 shared_informer.go:280] Caches are synced for node config
*
* ==> kube-proxy [b5a7ee396dc6] <==
* I0307 18:28:15.472153 1 node.go:163] Successfully retrieved node IP: 192.168.64.12
I0307 18:28:15.472708 1 server_others.go:109] "Detected node IP" address="192.168.64.12"
I0307 18:28:15.472783 1 server_others.go:535] "Using iptables proxy"
I0307 18:28:15.556618 1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0307 18:28:15.556713 1 server_others.go:176] "Using iptables Proxier"
I0307 18:28:15.557566 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0307 18:28:15.558340 1 server.go:655] "Version info" version="v1.26.2"
I0307 18:28:15.558371 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0307 18:28:15.560891 1 config.go:317] "Starting service config controller"
I0307 18:28:15.561767 1 shared_informer.go:273] Waiting for caches to sync for service config
I0307 18:28:15.561822 1 config.go:226] "Starting endpoint slice config controller"
I0307 18:28:15.561848 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0307 18:28:15.565094 1 config.go:444] "Starting node config controller"
I0307 18:28:15.565123 1 shared_informer.go:273] Waiting for caches to sync for node config
I0307 18:28:15.662569 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0307 18:28:15.662641 1 shared_informer.go:280] Caches are synced for service config
I0307 18:28:15.665665 1 shared_informer.go:280] Caches are synced for node config
*
* ==> kube-scheduler [10d167b9d987] <==
* E0307 18:18:25.562466 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0307 18:18:25.562560 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0307 18:18:25.562674 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0307 18:18:25.562782 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0307 18:18:25.562847 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0307 18:18:25.563105 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0307 18:18:25.563202 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0307 18:18:25.563628 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0307 18:18:25.563744 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0307 18:18:26.434289 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0307 18:18:26.434408 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0307 18:18:26.442009 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0307 18:18:26.442097 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0307 18:18:26.456512 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0307 18:18:26.456552 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0307 18:18:26.488229 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0307 18:18:26.488359 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0307 18:18:26.563741 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0307 18:18:26.564207 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0307 18:18:26.667408 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0307 18:18:26.667448 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
I0307 18:18:26.955242 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0307 18:26:59.644696 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0307 18:26:59.645252 1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
E0307 18:26:59.645273 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kube-scheduler [26cf0a14d586] <==
* I0307 18:28:11.367805 1 serving.go:348] Generated self-signed cert in-memory
W0307 18:28:13.150328 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0307 18:28:13.150362 1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0307 18:28:13.150371 1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
W0307 18:28:13.150376 1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0307 18:28:13.235320 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.2"
I0307 18:28:13.235372 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0307 18:28:13.239708 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0307 18:28:13.239806 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0307 18:28:13.241227 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0307 18:28:13.242006 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0307 18:28:13.341779 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Tue 2023-03-07 18:27:25 UTC, ends at Tue 2023-03-07 18:30:17 UTC. --
Mar 07 18:28:17 multinode-260000 kubelet[1239]: E0307 18:28:17.157728 1239 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Mar 07 18:28:17 multinode-260000 kubelet[1239]: E0307 18:28:17.157918 1239 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6-config-volume podName:c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6 nodeName:}" failed. No retries permitted until 2023-03-07 18:28:21.157899363 +0000 UTC m=+12.844186086 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6-config-volume") pod "coredns-787d4945fb-x8m8v" (UID: "c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6") : object "kube-system"/"coredns" not registered
Mar 07 18:28:17 multinode-260000 kubelet[1239]: I0307 18:28:17.243165 1239 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdbc154f16c5ecae8e4cd1c88a503e2325c5dd83beb13fd64441377c4f9e7ec0"
Mar 07 18:28:17 multinode-260000 kubelet[1239]: E0307 18:28:17.862532 1239 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Mar 07 18:28:17 multinode-260000 kubelet[1239]: E0307 18:28:17.862660 1239 projected.go:198] Error preparing data for projected volume kube-api-access-qh9hd for pod default/busybox-6b86dd6d48-tw9p8: object "default"/"kube-root-ca.crt" not registered
Mar 07 18:28:17 multinode-260000 kubelet[1239]: E0307 18:28:17.862775 1239 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00822b5c-f30a-4e57-9efd-48e3cae67dd8-kube-api-access-qh9hd podName:00822b5c-f30a-4e57-9efd-48e3cae67dd8 nodeName:}" failed. No retries permitted until 2023-03-07 18:28:21.862765021 +0000 UTC m=+13.549051734 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qh9hd" (UniqueName: "kubernetes.io/projected/00822b5c-f30a-4e57-9efd-48e3cae67dd8-kube-api-access-qh9hd") pod "busybox-6b86dd6d48-tw9p8" (UID: "00822b5c-f30a-4e57-9efd-48e3cae67dd8") : object "default"/"kube-root-ca.crt" not registered
Mar 07 18:28:18 multinode-260000 kubelet[1239]: E0307 18:28:18.278405 1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-x8m8v" podUID=c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6
Mar 07 18:28:18 multinode-260000 kubelet[1239]: E0307 18:28:18.279104 1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-tw9p8" podUID=00822b5c-f30a-4e57-9efd-48e3cae67dd8
Mar 07 18:28:18 multinode-260000 kubelet[1239]: E0307 18:28:18.551174 1239 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Mar 07 18:28:19 multinode-260000 kubelet[1239]: E0307 18:28:19.553176 1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-x8m8v" podUID=c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6
Mar 07 18:28:19 multinode-260000 kubelet[1239]: E0307 18:28:19.553320 1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-tw9p8" podUID=00822b5c-f30a-4e57-9efd-48e3cae67dd8
Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.191160 1239 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.191491 1239 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6-config-volume podName:c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6 nodeName:}" failed. No retries permitted until 2023-03-07 18:28:29.191480722 +0000 UTC m=+20.877767432 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6-config-volume") pod "coredns-787d4945fb-x8m8v" (UID: "c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6") : object "kube-system"/"coredns" not registered
Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.552920 1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-x8m8v" podUID=c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6
Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.553059 1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-tw9p8" podUID=00822b5c-f30a-4e57-9efd-48e3cae67dd8
Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.897966 1239 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.898129 1239 projected.go:198] Error preparing data for projected volume kube-api-access-qh9hd for pod default/busybox-6b86dd6d48-tw9p8: object "default"/"kube-root-ca.crt" not registered
Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.898305 1239 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00822b5c-f30a-4e57-9efd-48e3cae67dd8-kube-api-access-qh9hd podName:00822b5c-f30a-4e57-9efd-48e3cae67dd8 nodeName:}" failed. No retries permitted until 2023-03-07 18:28:29.898286181 +0000 UTC m=+21.584572909 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qh9hd" (UniqueName: "kubernetes.io/projected/00822b5c-f30a-4e57-9efd-48e3cae67dd8-kube-api-access-qh9hd") pod "busybox-6b86dd6d48-tw9p8" (UID: "00822b5c-f30a-4e57-9efd-48e3cae67dd8") : object "default"/"kube-root-ca.crt" not registered
Mar 07 18:28:23 multinode-260000 kubelet[1239]: E0307 18:28:23.554580 1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-x8m8v" podUID=c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6
Mar 07 18:28:23 multinode-260000 kubelet[1239]: E0307 18:28:23.554943 1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-tw9p8" podUID=00822b5c-f30a-4e57-9efd-48e3cae67dd8
Mar 07 18:28:30 multinode-260000 kubelet[1239]: I0307 18:28:30.592014 1239 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b76d3e91590c9da6205b8d32d4b932be8104ea717355bd7711e406514dad7dd9"
Mar 07 18:28:44 multinode-260000 kubelet[1239]: I0307 18:28:44.723203 1239 scope.go:115] "RemoveContainer" containerID="c4559ff3518da6f34f4cdc748b8c7c12071cc25ff90faaec5b6ea9e714e7aba4"
Mar 07 18:28:44 multinode-260000 kubelet[1239]: I0307 18:28:44.723439 1239 scope.go:115] "RemoveContainer" containerID="fb55a8f7e7acf79ab5acef082e9687db3c86b8350d3822b8162a5264fa8a8737"
Mar 07 18:28:44 multinode-260000 kubelet[1239]: E0307 18:28:44.723568 1239 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(0b88c317-8e90-4927-b4f8-cae5597b5dc8)\"" pod="kube-system/storage-provisioner" podUID=0b88c317-8e90-4927-b4f8-cae5597b5dc8
Mar 07 18:28:57 multinode-260000 kubelet[1239]: I0307 18:28:57.552994 1239 scope.go:115] "RemoveContainer" containerID="fb55a8f7e7acf79ab5acef082e9687db3c86b8350d3822b8162a5264fa8a8737"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-260000 -n multinode-260000
helpers_test.go:261: (dbg) Run: kubectl --context multinode-260000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-6b86dd6d48-dxpfk
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context multinode-260000 describe pod busybox-6b86dd6d48-dxpfk
helpers_test.go:282: (dbg) kubectl --context multinode-260000 describe pod busybox-6b86dd6d48-dxpfk:
-- stdout --
Name:             busybox-6b86dd6d48-dxpfk
Namespace:        default
Priority:         0
Service Account:  default
Node:             multinode-260000-m03/
Labels:           app=busybox
                  pod-template-hash=6b86dd6d48
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/busybox-6b86dd6d48
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rqb2s (ro)
Conditions:
  Type           Status
  PodScheduled   True
Volumes:
  kube-api-access-rqb2s:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  89s   default-scheduler  Successfully assigned default/busybox-6b86dd6d48-dxpfk to multinode-260000-m03
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (198.98s)