Test Report: Hyperkit_macOS 15985

49d57361cbdf0d306690482a173cc4589bc1e918:2023-03-07:28216

Test failures (3/306)

Order  Failed test                               Duration (s)
210    TestMultiNode/serial/RestartKeepsNodes    198.98
224    TestRunningBinaryUpgrade                  116.72
299    TestNetworkPlugins/group/flannel/Start    21.17
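To reproduce the headline failure locally, the integration suite can be filtered down to the single test. A minimal sketch, assuming a built out/minikube-darwin-amd64 and a working hyperkit install; the build tag and flag names follow the usual minikube CI pattern and should be checked against test/integration/main_test.go:

	# Sketch: rerun only the failing test against the hyperkit driver
	go test -tags integration ./test/integration -v -timeout 60m \
		-run "TestMultiNode/serial/RestartKeepsNodes" \
		-args --minikube-start-args="--driver=hyperkit"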
TestMultiNode/serial/RestartKeepsNodes (198.98s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-260000
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-260000
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-260000: (18.397689872s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-260000 --wait=true -v=8 --alsologtostderr
E0307 10:28:08.338508    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:29:18.411413    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:29:36.859048    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
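The three cert_rotation warnings above reference client certificates for profiles (ingress-addon-legacy-125000, addons-251000, functional-333000) that earlier tests in this run already tore down; the shared kubeconfig still carries entries pointing at them, so client-go's certificate watcher logs a miss. They are almost certainly noise rather than the cause of this failure. A cleanup sketch for a local reproduction, using the profile names from the warnings:

	# Sketch: drop kubeconfig contexts left behind by deleted profiles
	kubectl config get-contexts
	kubectl config delete-context addons-251000    # repeat for each stale profile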
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-260000 --wait=true -v=8 --alsologtostderr: exit status 90 (2m56.631240642s)

-- stdout --
	* [multinode-260000] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting control plane node multinode-260000 in cluster multinode-260000
	* Restarting existing hyperkit VM for "multinode-260000" ...
	* Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-260000-m02 in cluster multinode-260000
	* Restarting existing hyperkit VM for "multinode-260000-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.64.12
	* Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
	  - env NO_PROXY=192.168.64.12
	* Verifying Kubernetes components...
	* Starting worker node multinode-260000-m03 in cluster multinode-260000
	* Restarting existing hyperkit VM for "multinode-260000-m03" ...
	* Found network options:
	  - NO_PROXY=192.168.64.12,192.168.64.13

-- /stdout --
** stderr ** 
	I0307 10:27:17.701567    7018 out.go:296] Setting OutFile to fd 1 ...
	I0307 10:27:17.701766    7018 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:27:17.701771    7018 out.go:309] Setting ErrFile to fd 2...
	I0307 10:27:17.701775    7018 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:27:17.701881    7018 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15985-3430/.minikube/bin
	I0307 10:27:17.703156    7018 out.go:303] Setting JSON to false
	I0307 10:27:17.723710    7018 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3412,"bootTime":1678210225,"procs":381,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 10:27:17.723849    7018 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0307 10:27:17.767920    7018 out.go:177] * [multinode-260000] minikube v1.29.0 on Darwin 13.2.1
	I0307 10:27:17.789379    7018 notify.go:220] Checking for updates...
	I0307 10:27:17.811044    7018 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 10:27:17.832029    7018 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:27:17.853161    7018 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 10:27:17.875122    7018 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:27:17.896016    7018 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	I0307 10:27:17.917197    7018 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:27:17.939813    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:27:17.939897    7018 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 10:27:17.940536    7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:27:17.940612    7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:27:17.948145    7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51638
	I0307 10:27:17.948508    7018 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:27:17.948945    7018 main.go:141] libmachine: Using API Version  1
	I0307 10:27:17.948957    7018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:27:17.949170    7018 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:27:17.949257    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:17.976910    7018 out.go:177] * Using the hyperkit driver based on existing profile
	I0307 10:27:18.019030    7018 start.go:296] selected driver: hyperkit
	I0307 10:27:18.019085    7018 start.go:857] validating driver "hyperkit" against &{Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 10:27:18.019304    7018 start.go:868] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:27:18.019411    7018 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:27:18.019612    7018 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15985-3430/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0307 10:27:18.027551    7018 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.29.0
	I0307 10:27:18.031921    7018 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:27:18.031941    7018 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0307 10:27:18.034844    7018 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:27:18.034876    7018 cni.go:84] Creating CNI manager for ""
	I0307 10:27:18.034887    7018 cni.go:136] 3 nodes found, recommending kindnet
	I0307 10:27:18.034896    7018 start_flags.go:319] config:
	{Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 10:27:18.035029    7018 iso.go:125] acquiring lock: {Name:mk7e0ac9e85418e0580033b84b7097185a725e89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:27:18.076950    7018 out.go:177] * Starting control plane node multinode-260000 in cluster multinode-260000
	I0307 10:27:18.098026    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:27:18.098116    7018 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0307 10:27:18.098148    7018 cache.go:57] Caching tarball of preloaded images
	I0307 10:27:18.098313    7018 preload.go:174] Found /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 10:27:18.098333    7018 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0307 10:27:18.098530    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:27:18.099358    7018 cache.go:193] Successfully downloaded all kic artifacts
	I0307 10:27:18.099407    7018 start.go:364] acquiring machines lock for multinode-260000: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:27:18.099512    7018 start.go:368] acquired machines lock for "multinode-260000" in 86.293µs
	I0307 10:27:18.099554    7018 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:27:18.099566    7018 fix.go:55] fixHost starting: 
	I0307 10:27:18.100062    7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:27:18.100091    7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:27:18.107480    7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51640
	I0307 10:27:18.107803    7018 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:27:18.108127    7018 main.go:141] libmachine: Using API Version  1
	I0307 10:27:18.108137    7018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:27:18.108326    7018 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:27:18.108443    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:18.108543    7018 main.go:141] libmachine: (multinode-260000) Calling .GetState
	I0307 10:27:18.108624    7018 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:27:18.108709    7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid from json: 6235
	I0307 10:27:18.109465    7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid 6235 missing from process table
	I0307 10:27:18.109498    7018 fix.go:103] recreateIfNeeded on multinode-260000: state=Stopped err=<nil>
	I0307 10:27:18.109518    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	W0307 10:27:18.109599    7018 fix.go:129] unexpected machine state, will restart: <nil>
	I0307 10:27:18.130859    7018 out.go:177] * Restarting existing hyperkit VM for "multinode-260000" ...
	I0307 10:27:18.151952    7018 main.go:141] libmachine: (multinode-260000) Calling .Start
	I0307 10:27:18.152162    7018 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:27:18.152193    7018 main.go:141] libmachine: (multinode-260000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid
	I0307 10:27:18.153359    7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid 6235 missing from process table
	I0307 10:27:18.153369    7018 main.go:141] libmachine: (multinode-260000) DBG | pid 6235 is in state "Stopped"
	I0307 10:27:18.153384    7018 main.go:141] libmachine: (multinode-260000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid...
	I0307 10:27:18.153520    7018 main.go:141] libmachine: (multinode-260000) DBG | Using UUID 6086a850-bd14-11ed-9c3c-149d997fca88
	I0307 10:27:18.261699    7018 main.go:141] libmachine: (multinode-260000) DBG | Generated MAC f2:4e:cd:75:18:a7
	I0307 10:27:18.261738    7018 main.go:141] libmachine: (multinode-260000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000
	I0307 10:27:18.261843    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6086a850-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ecbd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:27:18.261893    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6086a850-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ecbd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:27:18.261955    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6086a850-bd14-11ed-9c3c-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/multinode-260000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"}
	I0307 10:27:18.262040    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6086a850-bd14-11ed-9c3c-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/multinode-260000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/console-ring -f kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"
	I0307 10:27:18.262064    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0307 10:27:18.263449    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Pid is 7033
	I0307 10:27:18.263845    7018 main.go:141] libmachine: (multinode-260000) DBG | Attempt 0
	I0307 10:27:18.263868    7018 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:27:18.263948    7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid from json: 7033
	I0307 10:27:18.265382    7018 main.go:141] libmachine: (multinode-260000) DBG | Searching for f2:4e:cd:75:18:a7 in /var/db/dhcpd_leases ...
	I0307 10:27:18.265430    7018 main.go:141] libmachine: (multinode-260000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0307 10:27:18.265476    7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x64078204}
	I0307 10:27:18.265490    7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ca:14:a2:6d:d0:c ID:1,ca:14:a2:6d:d0:c Lease:0x6407819f}
	I0307 10:27:18.265519    7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d194}
	I0307 10:27:18.265530    7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d15a}
	I0307 10:27:18.265540    7018 main.go:141] libmachine: (multinode-260000) DBG | Found match: f2:4e:cd:75:18:a7
	I0307 10:27:18.265548    7018 main.go:141] libmachine: (multinode-260000) DBG | IP: 192.168.64.12
	I0307 10:27:18.265590    7018 main.go:141] libmachine: (multinode-260000) Calling .GetConfigRaw
	I0307 10:27:18.266196    7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
	I0307 10:27:18.266384    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:27:18.266657    7018 machine.go:88] provisioning docker machine ...
	I0307 10:27:18.266667    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:18.266773    7018 main.go:141] libmachine: (multinode-260000) Calling .GetMachineName
	I0307 10:27:18.266878    7018 buildroot.go:166] provisioning hostname "multinode-260000"
	I0307 10:27:18.266892    7018 main.go:141] libmachine: (multinode-260000) Calling .GetMachineName
	I0307 10:27:18.266989    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:18.267073    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:18.267172    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:18.267250    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:18.267341    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:18.267461    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:27:18.267830    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0307 10:27:18.267839    7018 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-260000 && echo "multinode-260000" | sudo tee /etc/hostname
	I0307 10:27:18.269902    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0307 10:27:18.319277    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0307 10:27:18.319873    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:27:18.319886    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:27:18.319904    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:27:18.319918    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:27:18.674514    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0307 10:27:18.674532    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0307 10:27:18.778516    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:27:18.778535    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:27:18.778566    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:27:18.778585    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:27:18.779423    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0307 10:27:18.779434    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0307 10:27:23.282731    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0307 10:27:23.282756    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0307 10:27:23.282762    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0307 10:27:53.345501    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-260000
	
	I0307 10:27:53.345516    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:53.345641    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:53.345737    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.345814    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.345897    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:53.346017    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:27:53.346336    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0307 10:27:53.346349    7018 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-260000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-260000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-260000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 10:27:53.408248    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
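The SSH command above is the standard hostname-pinning step: if no line in /etc/hosts mentions the machine name, a 127.0.1.1 entry is appended, and an existing 127.0.1.1 line is rewritten in place instead. A manual spot-check, as a sketch reusing the address, user, and key shown later in this log:

	# Sketch: confirm the hostname and its /etc/hosts entry inside the guest
	ssh -i /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa \
		docker@192.168.64.12 'hostname; grep multinode-260000 /etc/hosts'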
	I0307 10:27:53.408267    7018 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
	I0307 10:27:53.408279    7018 buildroot.go:174] setting up certificates
	I0307 10:27:53.408288    7018 provision.go:83] configureAuth start
	I0307 10:27:53.408298    7018 main.go:141] libmachine: (multinode-260000) Calling .GetMachineName
	I0307 10:27:53.408431    7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
	I0307 10:27:53.408534    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:53.408622    7018 provision.go:138] copyHostCerts
	I0307 10:27:53.408658    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:27:53.408716    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
	I0307 10:27:53.408724    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:27:53.408836    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
	I0307 10:27:53.409016    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:27:53.409051    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
	I0307 10:27:53.409056    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:27:53.409119    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
	I0307 10:27:53.409268    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:27:53.409298    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
	I0307 10:27:53.409303    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:27:53.409364    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
	I0307 10:27:53.409496    7018 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.multinode-260000 san=[192.168.64.12 192.168.64.12 localhost 127.0.0.1 minikube multinode-260000]
	I0307 10:27:53.471318    7018 provision.go:172] copyRemoteCerts
	I0307 10:27:53.471371    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 10:27:53.471386    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:53.471501    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:53.471590    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.471685    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:53.471784    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
	I0307 10:27:53.506343    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0307 10:27:53.506415    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 10:27:53.522448    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0307 10:27:53.522505    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0307 10:27:53.538178    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0307 10:27:53.538241    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 10:27:53.554443    7018 provision.go:86] duration metric: configureAuth took 146.138879ms
	I0307 10:27:53.554456    7018 buildroot.go:189] setting minikube options for container-runtime
	I0307 10:27:53.554627    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:27:53.554640    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:53.554773    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:53.554871    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:53.554956    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.555028    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.555105    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:53.555212    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:27:53.555523    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0307 10:27:53.555532    7018 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 10:27:53.611701    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 10:27:53.611715    7018 buildroot.go:70] root file system type: tmpfs
	I0307 10:27:53.611791    7018 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 10:27:53.611806    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:53.611930    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:53.612020    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.612103    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.612184    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:53.612317    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:27:53.612630    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0307 10:27:53.612673    7018 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 10:27:53.678288    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 10:27:53.678311    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:53.678443    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:53.678532    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.678617    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.678712    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:53.678844    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:27:53.679161    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0307 10:27:53.679175    7018 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 10:27:54.321619    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 10:27:54.321632    7018 machine.go:91] provisioned docker machine in 36.054802092s
	I0307 10:27:54.321643    7018 start.go:300] post-start starting for "multinode-260000" (driver="hyperkit")
	I0307 10:27:54.321648    7018 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 10:27:54.321659    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:54.321839    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 10:27:54.321852    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:54.321961    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:54.322042    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:54.322149    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:54.322246    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
	I0307 10:27:54.357925    7018 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 10:27:54.360302    7018 command_runner.go:130] > NAME=Buildroot
	I0307 10:27:54.360311    7018 command_runner.go:130] > VERSION=2021.02.12-1-gab7f370-dirty
	I0307 10:27:54.360321    7018 command_runner.go:130] > ID=buildroot
	I0307 10:27:54.360325    7018 command_runner.go:130] > VERSION_ID=2021.02.12
	I0307 10:27:54.360330    7018 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0307 10:27:54.360498    7018 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 10:27:54.360509    7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
	I0307 10:27:54.360589    7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
	I0307 10:27:54.360737    7018 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
	I0307 10:27:54.360743    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /etc/ssl/certs/39032.pem
	I0307 10:27:54.360917    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 10:27:54.366509    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
	I0307 10:27:54.382252    7018 start.go:303] post-start completed in 60.601074ms
	I0307 10:27:54.382265    7018 fix.go:57] fixHost completed within 36.282535453s
	I0307 10:27:54.382281    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:54.382411    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:54.382494    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:54.382592    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:54.382687    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:54.382812    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:27:54.383114    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0307 10:27:54.383122    7018 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 10:27:54.438352    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678213674.566046378
	
	I0307 10:27:54.438363    7018 fix.go:207] guest clock: 1678213674.566046378
	I0307 10:27:54.438368    7018 fix.go:220] Guest: 2023-03-07 10:27:54.566046378 -0800 PST Remote: 2023-03-07 10:27:54.382269 -0800 PST m=+36.717005002 (delta=183.777378ms)
	I0307 10:27:54.438390    7018 fix.go:191] guest clock delta is within tolerance: 183.777378ms
	I0307 10:27:54.438395    7018 start.go:83] releasing machines lock for "multinode-260000", held for 36.33870613s
	I0307 10:27:54.438412    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:54.438533    7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
	I0307 10:27:54.438635    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:54.438919    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:54.439021    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:54.439107    7018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 10:27:54.439131    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:54.439139    7018 ssh_runner.go:195] Run: cat /version.json
	I0307 10:27:54.439150    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:54.439230    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:54.439270    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:54.439355    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:54.439367    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:54.439464    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:54.439484    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:54.439556    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
	I0307 10:27:54.439569    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
	I0307 10:27:54.469202    7018 command_runner.go:130] > {"iso_version": "v1.29.0-1677261626-15923", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "d5f8b7c14d0e3cd88db476786b15ed1c8f7b9a62"}
	I0307 10:27:54.469345    7018 ssh_runner.go:195] Run: systemctl --version
	I0307 10:27:54.473110    7018 command_runner.go:130] > systemd 247 (247)
	I0307 10:27:54.473123    7018 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0307 10:27:54.510321    7018 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0307 10:27:54.511264    7018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 10:27:54.515706    7018 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0307 10:27:54.515766    7018 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 10:27:54.515808    7018 ssh_runner.go:195] Run: which cri-dockerd
	I0307 10:27:54.518180    7018 command_runner.go:130] > /usr/bin/cri-dockerd
	I0307 10:27:54.518271    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 10:27:54.524837    7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0307 10:27:54.535806    7018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 10:27:54.546514    7018 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0307 10:27:54.546672    7018 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
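Pre-existing bridge and podman CNI configs are parked with a .mk_disabled suffix so that the CNI minikube manages (kindnet here, per the earlier "3 nodes found, recommending kindnet" line) is the only active configuration in /etc/cni/net.d. Undoing that by hand is just the reverse rename; a sketch:

	# Sketch: restore CNI configs that minikube parked
	for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done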
	I0307 10:27:54.546690    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:27:54.546786    7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:27:54.561856    7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
	I0307 10:27:54.561870    7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 10:27:54.561875    7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 10:27:54.561879    7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 10:27:54.561885    7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 10:27:54.561889    7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 10:27:54.561893    7018 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 10:27:54.561898    7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 10:27:54.561902    7018 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0307 10:27:54.561906    7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:27:54.561912    7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0307 10:27:54.562858    7018 docker.go:630] Got preloaded images: -- stdout --
	kindest/kindnetd:v20230227-15197099
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0307 10:27:54.562875    7018 docker.go:560] Images already preloaded, skipping extraction
	I0307 10:27:54.562881    7018 start.go:485] detecting cgroup driver to use...
	I0307 10:27:54.562957    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:27:54.574839    7018 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0307 10:27:54.574851    7018 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
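With /etc/crictl.yaml pointing both endpoints at the containerd socket, the CRI endpoint can be exercised directly; a sketch (crictl reads /etc/crictl.yaml by default, so no extra flags should be needed):

	# Sketch: poke the configured CRI endpoint inside the guest
	sudo crictl version
	sudo crictl info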
	I0307 10:27:54.575174    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 10:27:54.582305    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 10:27:54.589279    7018 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 10:27:54.589317    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 10:27:54.596289    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:27:54.603219    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 10:27:54.610180    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:27:54.617267    7018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 10:27:54.624610    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 10:27:54.631553    7018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 10:27:54.637786    7018 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0307 10:27:54.637952    7018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
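These two knobs are the standard Kubernetes networking prerequisites: bridged traffic must traverse iptables, and IPv4 forwarding must be enabled. Both can be read back in a single call; a sketch:

	# Sketch: verify both kernel settings kube networking relies on
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward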
	I0307 10:27:54.644168    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:27:54.724435    7018 ssh_runner.go:195] Run: sudo systemctl restart containerd
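
The sed edits above switch containerd to the runc v2 shim, the registry.k8s.io/pause:3.9 sandbox image, and the cgroupfs cgroup driver before the restart. A minimal sketch (not part of the test run) for double-checking the effective settings inside the VM, assuming the stock containerd CLI is available:

	# Sketch: print the merged containerd config and pick out the values the
	# sed edits above were meant to set.
	sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image|runc\.v2'
	# Expected after the edits: SystemdCgroup = false,
	# sandbox_image = "registry.k8s.io/pause:3.9", and the runc.v2 runtime type.
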
	I0307 10:27:54.736384    7018 start.go:485] detecting cgroup driver to use...
	I0307 10:27:54.736451    7018 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 10:27:54.745963    7018 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0307 10:27:54.745979    7018 command_runner.go:130] > [Unit]
	I0307 10:27:54.745984    7018 command_runner.go:130] > Description=Docker Application Container Engine
	I0307 10:27:54.745988    7018 command_runner.go:130] > Documentation=https://docs.docker.com
	I0307 10:27:54.745993    7018 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0307 10:27:54.745999    7018 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0307 10:27:54.746004    7018 command_runner.go:130] > StartLimitBurst=3
	I0307 10:27:54.746007    7018 command_runner.go:130] > StartLimitIntervalSec=60
	I0307 10:27:54.746011    7018 command_runner.go:130] > [Service]
	I0307 10:27:54.746014    7018 command_runner.go:130] > Type=notify
	I0307 10:27:54.746017    7018 command_runner.go:130] > Restart=on-failure
	I0307 10:27:54.746024    7018 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0307 10:27:54.746040    7018 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0307 10:27:54.746047    7018 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0307 10:27:54.746053    7018 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0307 10:27:54.746068    7018 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0307 10:27:54.746075    7018 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0307 10:27:54.746081    7018 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0307 10:27:54.746090    7018 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0307 10:27:54.746099    7018 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0307 10:27:54.746104    7018 command_runner.go:130] > ExecStart=
	I0307 10:27:54.746114    7018 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0307 10:27:54.746119    7018 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0307 10:27:54.746130    7018 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0307 10:27:54.746136    7018 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0307 10:27:54.746140    7018 command_runner.go:130] > LimitNOFILE=infinity
	I0307 10:27:54.746143    7018 command_runner.go:130] > LimitNPROC=infinity
	I0307 10:27:54.746147    7018 command_runner.go:130] > LimitCORE=infinity
	I0307 10:27:54.746156    7018 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0307 10:27:54.746161    7018 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0307 10:27:54.746165    7018 command_runner.go:130] > TasksMax=infinity
	I0307 10:27:54.746168    7018 command_runner.go:130] > TimeoutStartSec=0
	I0307 10:27:54.746173    7018 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0307 10:27:54.746179    7018 command_runner.go:130] > Delegate=yes
	I0307 10:27:54.746184    7018 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0307 10:27:54.746188    7018 command_runner.go:130] > KillMode=process
	I0307 10:27:54.746191    7018 command_runner.go:130] > [Install]
	I0307 10:27:54.746201    7018 command_runner.go:130] > WantedBy=multi-user.target
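
The docker.service unit above relies on the override pattern its own comments describe: an empty ExecStart= first clears the command inherited from the base unit, then the replacement is defined. A hedged sketch of applying the same pattern with a drop-in (the file name and extra flag are hypothetical):

	# Hypothetical drop-in using the same clear-then-redefine ExecStart pattern.
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf >/dev/null <<'EOF'
	[Service]
	# Clear the inherited ExecStart; without this, systemd refuses a second
	# ExecStart for Type=notify services.
	ExecStart=
	ExecStart=/usr/bin/dockerd --debug
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
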
	I0307 10:27:54.746263    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:27:54.754873    7018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 10:27:54.766931    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:27:54.775320    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:27:54.784274    7018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 10:27:54.810077    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:27:54.819002    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:27:54.830417    7018 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 10:27:54.830427    7018 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
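
With containerd and crio stopped, /etc/crictl.yaml is rewritten a second time so crictl targets cri-dockerd rather than containerd (compare the earlier write at 10:27:54). A quick sketch for confirming which runtime crictl will reach, assuming crictl is on the PATH in the VM:

	# Sketch: show the configured endpoints, then probe the runtime behind them.
	cat /etc/crictl.yaml
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
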
	I0307 10:27:54.830775    7018 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 10:27:54.910530    7018 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 10:27:54.991106    7018 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 10:27:54.991125    7018 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0307 10:27:55.002612    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:27:55.082706    7018 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:27:56.344251    7018 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.261521172s)
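
The 144-byte /etc/docker/daemon.json copied above is not printed in the log; it is what makes Docker report the cgroupfs driver later in this run. A hypothetical daemon.json with that effect (the contents below are an assumption, not the file minikube shipped):

	# Assumption: a minimal daemon.json that forces the cgroupfs driver.
	sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl restart docker
	docker info --format '{{.CgroupDriver}}'   # should print: cgroupfs
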
	I0307 10:27:56.344319    7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 10:27:56.427984    7018 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 10:27:56.518324    7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 10:27:56.611821    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:27:56.699165    7018 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 10:27:56.710403    7018 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 10:27:56.710477    7018 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 10:27:56.714055    7018 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0307 10:27:56.714067    7018 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0307 10:27:56.714072    7018 command_runner.go:130] > Device: 16h/22d	Inode: 853         Links: 1
	I0307 10:27:56.714079    7018 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0307 10:27:56.714098    7018 command_runner.go:130] > Access: 2023-03-07 18:27:56.836416904 +0000
	I0307 10:27:56.714105    7018 command_runner.go:130] > Modify: 2023-03-07 18:27:56.836416904 +0000
	I0307 10:27:56.714109    7018 command_runner.go:130] > Change: 2023-03-07 18:27:56.838416903 +0000
	I0307 10:27:56.714113    7018 command_runner.go:130] >  Birth: -
	I0307 10:27:56.714136    7018 start.go:553] Will wait 60s for crictl version
	I0307 10:27:56.714180    7018 ssh_runner.go:195] Run: which crictl
	I0307 10:27:56.716256    7018 command_runner.go:130] > /usr/bin/crictl
	I0307 10:27:56.716479    7018 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 10:27:56.782605    7018 command_runner.go:130] > Version:  0.1.0
	I0307 10:27:56.782630    7018 command_runner.go:130] > RuntimeName:  docker
	I0307 10:27:56.782659    7018 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0307 10:27:56.782788    7018 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0307 10:27:56.786182    7018 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0307 10:27:56.786249    7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:27:56.806368    7018 command_runner.go:130] > 20.10.23
	I0307 10:27:56.807205    7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:27:56.827016    7018 command_runner.go:130] > 20.10.23
	I0307 10:27:56.870119    7018 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
	I0307 10:27:56.870166    7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
	I0307 10:27:56.870574    7018 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0307 10:27:56.874782    7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
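
The one-liner above is minikube's idiom for upserting a /etc/hosts entry: drop any existing line for the name, append the new mapping, and copy the result back with one sudo cp. The same idiom as a small bash function (a sketch; the function name is made up):

	# Sketch: upsert IP<tab>NAME in /etc/hosts, mirroring the command above.
	upsert_host() {
	  local ip="$1" name="$2"
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	}
	upsert_host 192.168.64.1 host.minikube.internal
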
	I0307 10:27:56.882699    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:27:56.882759    7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:27:56.898148    7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
	I0307 10:27:56.898160    7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 10:27:56.898164    7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 10:27:56.898169    7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 10:27:56.898172    7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 10:27:56.898176    7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 10:27:56.898180    7018 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 10:27:56.898184    7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 10:27:56.898188    7018 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0307 10:27:56.898197    7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:27:56.898202    7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0307 10:27:56.898858    7018 docker.go:630] Got preloaded images: -- stdout --
	kindest/kindnetd:v20230227-15197099
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0307 10:27:56.898867    7018 docker.go:560] Images already preloaded, skipping extraction
	I0307 10:27:56.898945    7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:27:56.913839    7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
	I0307 10:27:56.913851    7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 10:27:56.913855    7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 10:27:56.913869    7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 10:27:56.913873    7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 10:27:56.913877    7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 10:27:56.913881    7018 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 10:27:56.913885    7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 10:27:56.913889    7018 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0307 10:27:56.913893    7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:27:56.913900    7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0307 10:27:56.914547    7018 docker.go:630] Got preloaded images: -- stdout --
	kindest/kindnetd:v20230227-15197099
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0307 10:27:56.914562    7018 cache_images.go:84] Images are preloaded, skipping loading
	I0307 10:27:56.914636    7018 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 10:27:56.935563    7018 command_runner.go:130] > cgroupfs
	I0307 10:27:56.936272    7018 cni.go:84] Creating CNI manager for ""
	I0307 10:27:56.936282    7018 cni.go:136] 3 nodes found, recommending kindnet
	I0307 10:27:56.936296    7018 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0307 10:27:56.936310    7018 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.12 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-260000 NodeName:multinode-260000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0307 10:27:56.936405    7018 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-260000"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 10:27:56.936460    7018 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-260000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
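
The generated file above chains four kubeadm API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in a single YAML document, and the kubelet drop-in points the kubelet at cri-dockerd to match. A sketch for seeing which fields deviate from stock kubeadm defaults, using the binary and config paths from this log:

	# Sketch: diff minikube's rendered config against kubeadm's defaults.
	KUBEADM=/var/lib/minikube/binaries/v1.26.2/kubeadm
	sudo "$KUBEADM" config print init-defaults > /tmp/defaults.yaml
	diff -u /tmp/defaults.yaml /var/tmp/minikube/kubeadm.yaml.new
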
	I0307 10:27:56.936536    7018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0307 10:27:56.943109    7018 command_runner.go:130] > kubeadm
	I0307 10:27:56.943116    7018 command_runner.go:130] > kubectl
	I0307 10:27:56.943120    7018 command_runner.go:130] > kubelet
	I0307 10:27:56.943263    7018 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 10:27:56.943308    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 10:27:56.949592    7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (449 bytes)
	I0307 10:27:56.960366    7018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 10:27:56.970938    7018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2095 bytes)
	I0307 10:27:56.982338    7018 ssh_runner.go:195] Run: grep 192.168.64.12	control-plane.minikube.internal$ /etc/hosts
	I0307 10:27:56.984586    7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 10:27:56.991939    7018 certs.go:56] Setting up /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000 for IP: 192.168.64.12
	I0307 10:27:56.991953    7018 certs.go:186] acquiring lock for shared ca certs: {Name:mk21aa92235e3b083ba3cf4a52527e5734aca22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:27:56.992091    7018 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key
	I0307 10:27:56.992154    7018 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key
	I0307 10:27:56.992245    7018 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key
	I0307 10:27:56.992309    7018 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.key.546ed142
	I0307 10:27:56.992376    7018 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.key
	I0307 10:27:56.992385    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0307 10:27:56.992414    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0307 10:27:56.992439    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0307 10:27:56.992461    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0307 10:27:56.992479    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 10:27:56.992497    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0307 10:27:56.992518    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 10:27:56.992536    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 10:27:56.992623    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem (1338 bytes)
	W0307 10:27:56.992661    7018 certs.go:397] ignoring /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903_empty.pem, impossibly tiny 0 bytes
	I0307 10:27:56.992672    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 10:27:56.992706    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem (1082 bytes)
	I0307 10:27:56.992736    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem (1123 bytes)
	I0307 10:27:56.992769    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem (1675 bytes)
	I0307 10:27:56.992838    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem (1708 bytes)
	I0307 10:27:56.992873    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:27:56.992892    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem -> /usr/share/ca-certificates/3903.pem
	I0307 10:27:56.992913    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /usr/share/ca-certificates/39032.pem
	I0307 10:27:56.993367    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0307 10:27:57.008967    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 10:27:57.024057    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 10:27:57.039253    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 10:27:57.054424    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 10:27:57.069714    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 10:27:57.085285    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 10:27:57.100487    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 10:27:57.116166    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 10:27:57.131487    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem --> /usr/share/ca-certificates/3903.pem (1338 bytes)
	I0307 10:27:57.146782    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /usr/share/ca-certificates/39032.pem (1708 bytes)
	I0307 10:27:57.161670    7018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 10:27:57.172684    7018 ssh_runner.go:195] Run: openssl version
	I0307 10:27:57.175822    7018 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0307 10:27:57.176031    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/39032.pem && ln -fs /usr/share/ca-certificates/39032.pem /etc/ssl/certs/39032.pem"
	I0307 10:27:57.182397    7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/39032.pem
	I0307 10:27:57.185195    7018 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 18:06 /usr/share/ca-certificates/39032.pem
	I0307 10:27:57.185263    7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar  7 18:06 /usr/share/ca-certificates/39032.pem
	I0307 10:27:57.185306    7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/39032.pem
	I0307 10:27:57.188613    7018 command_runner.go:130] > 3ec20f2e
	I0307 10:27:57.188881    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/39032.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 10:27:57.195955    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 10:27:57.203206    7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:27:57.205892    7018 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:27:57.206086    7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar  7 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:27:57.206121    7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:27:57.209355    7018 command_runner.go:130] > b5213941
	I0307 10:27:57.209587    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 10:27:57.216626    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3903.pem && ln -fs /usr/share/ca-certificates/3903.pem /etc/ssl/certs/3903.pem"
	I0307 10:27:57.223521    7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3903.pem
	I0307 10:27:57.226194    7018 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 18:06 /usr/share/ca-certificates/3903.pem
	I0307 10:27:57.226381    7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar  7 18:06 /usr/share/ca-certificates/3903.pem
	I0307 10:27:57.226417    7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3903.pem
	I0307 10:27:57.229589    7018 command_runner.go:130] > 51391683
	I0307 10:27:57.229807    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3903.pem /etc/ssl/certs/51391683.0"
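
The openssl sequence above (hash, then ln -fs) is the standard subject-hash layout OpenSSL uses to locate CA certificates in /etc/ssl/certs. Condensed into a sketch for one certificate, with the hash value taken from this run:

	# Sketch: install a CA cert under the subject-hash name OpenSSL looks up.
	pem=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")      # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
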
	I0307 10:27:57.236882    7018 kubeadm.go:401] StartCluster: {Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 10:27:57.236992    7018 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 10:27:57.252692    7018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 10:27:57.259210    7018 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0307 10:27:57.259222    7018 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0307 10:27:57.259230    7018 command_runner.go:130] > /var/lib/minikube/etcd:
	I0307 10:27:57.259234    7018 command_runner.go:130] > member
	I0307 10:27:57.259381    7018 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0307 10:27:57.259400    7018 kubeadm.go:633] restartCluster start
	I0307 10:27:57.259443    7018 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 10:27:57.266382    7018 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:27:57.266677    7018 kubeconfig.go:135] verify returned: extract IP: "multinode-260000" does not appear in /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:27:57.266753    7018 kubeconfig.go:146] "multinode-260000" context is missing from /Users/jenkins/minikube-integration/15985-3430/kubeconfig - will repair!
	I0307 10:27:57.266945    7018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15985-3430/kubeconfig: {Name:mkea569ea3041d84fd3aeaa788f308c9891aa7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:27:57.267393    7018 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:27:57.267600    7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:27:57.268098    7018 cert_rotation.go:137] Starting client certificate rotation controller
	I0307 10:27:57.268266    7018 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 10:27:57.274410    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:27:57.274450    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:27:57.282537    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:27:57.783579    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:27:57.783768    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:27:57.794313    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:27:58.283596    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:27:58.283730    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:27:58.294644    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:27:58.782684    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:27:58.782873    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:27:58.793430    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:27:59.283543    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:27:59.283649    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:27:59.294225    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:27:59.782887    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:27:59.783019    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:27:59.793607    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:00.282689    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:00.282922    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:00.292782    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:00.784107    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:00.784212    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:00.794376    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:01.283293    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:01.283433    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:01.293684    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:01.783681    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:01.783913    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:01.794869    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:02.283942    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:02.284074    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:02.294517    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:02.782945    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:02.783113    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:02.794006    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:03.284588    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:03.284777    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:03.294981    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:03.783910    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:03.784171    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:03.795492    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:04.283913    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:04.284104    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:04.294550    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:04.784723    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:04.784921    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:04.795506    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:05.284742    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:05.284884    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:05.294924    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:05.784725    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:05.784834    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:05.795470    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:06.284719    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:06.284873    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:06.295722    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:06.784533    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:06.784754    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:06.795131    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:07.284699    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:07.287011    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:07.296334    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:07.296343    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:07.296382    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:07.304816    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:07.304829    7018 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
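
The long block above is a fixed-interval poll: roughly every 500ms minikube reruns pgrep for a kube-apiserver process and, once its deadline passes, concludes the cluster needs a reconfigure. The same wait expressed as a standalone bash sketch (the 10s timeout is an illustration; minikube's is longer):

	# Sketch: poll for the apiserver process with a deadline, like the loop above.
	deadline=$((SECONDS + 10))
	until pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); do
	  (( SECONDS >= deadline )) && { echo "timed out waiting for kube-apiserver" >&2; break; }
	  sleep 0.5
	done
	[ -n "${pid:-}" ] && echo "kube-apiserver pid: $pid"
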
	I0307 10:28:07.304833    7018 kubeadm.go:1120] stopping kube-system containers ...
	I0307 10:28:07.304891    7018 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 10:28:07.321379    7018 command_runner.go:130] > da06b08e5617
	I0307 10:28:07.321390    7018 command_runner.go:130] > c4559ff3518d
	I0307 10:28:07.321394    7018 command_runner.go:130] > 5b66601ca9d1
	I0307 10:28:07.321398    7018 command_runner.go:130] > 0ace7c6cf637
	I0307 10:28:07.321401    7018 command_runner.go:130] > 37e6cf092e1c
	I0307 10:28:07.321411    7018 command_runner.go:130] > ae9d394ad7a7
	I0307 10:28:07.321416    7018 command_runner.go:130] > 808d83da8d84
	I0307 10:28:07.321423    7018 command_runner.go:130] > 1bf0ab9eb4c5
	I0307 10:28:07.321426    7018 command_runner.go:130] > 2243964fbc4d
	I0307 10:28:07.321432    7018 command_runner.go:130] > 3b27eb7db4c2
	I0307 10:28:07.321436    7018 command_runner.go:130] > 10d167b9d987
	I0307 10:28:07.321440    7018 command_runner.go:130] > 6ac51e9516a2
	I0307 10:28:07.321443    7018 command_runner.go:130] > 3e9b5dec9e21
	I0307 10:28:07.321448    7018 command_runner.go:130] > 0721a87b433b
	I0307 10:28:07.321452    7018 command_runner.go:130] > aef4edf5b492
	I0307 10:28:07.321456    7018 command_runner.go:130] > cfcf920b7378
	I0307 10:28:07.322130    7018 docker.go:456] Stopping containers: [da06b08e5617 c4559ff3518d 5b66601ca9d1 0ace7c6cf637 37e6cf092e1c ae9d394ad7a7 808d83da8d84 1bf0ab9eb4c5 2243964fbc4d 3b27eb7db4c2 10d167b9d987 6ac51e9516a2 3e9b5dec9e21 0721a87b433b aef4edf5b492 cfcf920b7378]
	I0307 10:28:07.322197    7018 ssh_runner.go:195] Run: docker stop da06b08e5617 c4559ff3518d 5b66601ca9d1 0ace7c6cf637 37e6cf092e1c ae9d394ad7a7 808d83da8d84 1bf0ab9eb4c5 2243964fbc4d 3b27eb7db4c2 10d167b9d987 6ac51e9516a2 3e9b5dec9e21 0721a87b433b aef4edf5b492 cfcf920b7378
	I0307 10:28:07.338863    7018 command_runner.go:130] > da06b08e5617
	I0307 10:28:07.338874    7018 command_runner.go:130] > c4559ff3518d
	I0307 10:28:07.339268    7018 command_runner.go:130] > 5b66601ca9d1
	I0307 10:28:07.339476    7018 command_runner.go:130] > 0ace7c6cf637
	I0307 10:28:07.339531    7018 command_runner.go:130] > 37e6cf092e1c
	I0307 10:28:07.339608    7018 command_runner.go:130] > ae9d394ad7a7
	I0307 10:28:07.339615    7018 command_runner.go:130] > 808d83da8d84
	I0307 10:28:07.339735    7018 command_runner.go:130] > 1bf0ab9eb4c5
	I0307 10:28:07.339806    7018 command_runner.go:130] > 2243964fbc4d
	I0307 10:28:07.339952    7018 command_runner.go:130] > 3b27eb7db4c2
	I0307 10:28:07.340042    7018 command_runner.go:130] > 10d167b9d987
	I0307 10:28:07.340172    7018 command_runner.go:130] > 6ac51e9516a2
	I0307 10:28:07.340231    7018 command_runner.go:130] > 3e9b5dec9e21
	I0307 10:28:07.340237    7018 command_runner.go:130] > 0721a87b433b
	I0307 10:28:07.340416    7018 command_runner.go:130] > aef4edf5b492
	I0307 10:28:07.340541    7018 command_runner.go:130] > cfcf920b7378
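
The list/stop pair above selects every kube-system pod container by the kubelet naming scheme (k8s_<container>_<pod>_<namespace>_...). The two steps can be piped directly; a sketch:

	# Sketch: stop all kube-system pod containers in one pipeline.
	docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' \
	  | xargs -r docker stop
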
	I0307 10:28:07.341444    7018 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 10:28:07.352567    7018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 10:28:07.358762    7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0307 10:28:07.358772    7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0307 10:28:07.358778    7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0307 10:28:07.358784    7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 10:28:07.358923    7018 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 10:28:07.358971    7018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 10:28:07.365297    7018 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0307 10:28:07.365309    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:28:07.435009    7018 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 10:28:07.435021    7018 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0307 10:28:07.435026    7018 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0307 10:28:07.435249    7018 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 10:28:07.435474    7018 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0307 10:28:07.435692    7018 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0307 10:28:07.436004    7018 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0307 10:28:07.436233    7018 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0307 10:28:07.436509    7018 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0307 10:28:07.436724    7018 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 10:28:07.436961    7018 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 10:28:07.437121    7018 command_runner.go:130] > [certs] Using the existing "sa" key
	I0307 10:28:07.438004    7018 command_runner.go:130] ! W0307 18:28:07.567847    1206 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 10:28:07.438020    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:28:07.477158    7018 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 10:28:07.530979    7018 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 10:28:07.671495    7018 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 10:28:07.806243    7018 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 10:28:08.012059    7018 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 10:28:08.013940    7018 command_runner.go:130] ! W0307 18:28:07.610432    1212 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 10:28:08.013962    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:28:08.064445    7018 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 10:28:08.064458    7018 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 10:28:08.064462    7018 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0307 10:28:08.158176    7018 command_runner.go:130] ! W0307 18:28:08.188188    1218 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 10:28:08.158212    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:28:08.205939    7018 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 10:28:08.205952    7018 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 10:28:08.207362    7018 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 10:28:08.208239    7018 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 10:28:08.211123    7018 command_runner.go:130] ! W0307 18:28:08.337529    1240 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 10:28:08.211182    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:28:08.268874    7018 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 10:28:08.276469    7018 command_runner.go:130] ! W0307 18:28:08.400815    1250 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
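
Rather than rerunning kubeadm init wholesale, the restart path replays individual init phases against the regenerated config, as the five Run lines above show. The same sequence as standalone commands (binary and config paths taken from this log):

	# Sketch: the init phases minikube replays on a cluster restart.
	CFG=/var/tmp/minikube/kubeadm.yaml
	KUBEADM=/var/lib/minikube/binaries/v1.26.2/kubeadm
	sudo "$KUBEADM" init phase certs all --config "$CFG"
	sudo "$KUBEADM" init phase kubeconfig all --config "$CFG"
	sudo "$KUBEADM" init phase kubelet-start --config "$CFG"
	sudo "$KUBEADM" init phase control-plane all --config "$CFG"
	sudo "$KUBEADM" init phase etcd local --config "$CFG"
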
	I0307 10:28:08.276569    7018 api_server.go:51] waiting for apiserver process to appear ...
	I0307 10:28:08.276628    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:28:08.791796    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:28:09.291418    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:28:09.790079    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:28:10.289945    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:28:10.300303    7018 command_runner.go:130] > 1604
	I0307 10:28:10.300322    7018 api_server.go:71] duration metric: took 2.023748028s to wait for apiserver process to appear ...
	I0307 10:28:10.300332    7018 api_server.go:87] waiting for apiserver healthz status ...
	I0307 10:28:10.300340    7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0307 10:28:13.002874    7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0307 10:28:13.002891    7018 api_server.go:102] status: https://192.168.64.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0307 10:28:13.505043    7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0307 10:28:13.511549    7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0307 10:28:13.511564    7018 api_server.go:102] status: https://192.168.64.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0307 10:28:14.003030    7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0307 10:28:14.007459    7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0307 10:28:14.007479    7018 api_server.go:102] status: https://192.168.64.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0307 10:28:14.504449    7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0307 10:28:14.508376    7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 200:
	ok
	I0307 10:28:14.508433    7018 round_trippers.go:463] GET https://192.168.64.12:8443/version
	I0307 10:28:14.508438    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:14.508446    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:14.508452    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:14.516136    7018 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 10:28:14.516148    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:14.516154    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:14.516158    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:14.516163    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:14.516168    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:14.516173    7018 round_trippers.go:580]     Content-Length: 263
	I0307 10:28:14.516178    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:14 GMT
	I0307 10:28:14.516185    7018 round_trippers.go:580]     Audit-Id: 364007ce-aca2-49dd-9978-704f40503cf3
	I0307 10:28:14.516202    7018 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.2",
	  "gitCommit": "fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b",
	  "gitTreeState": "clean",
	  "buildDate": "2023-02-22T13:32:22Z",
	  "goVersion": "go1.19.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0307 10:28:14.516246    7018 api_server.go:140] control plane version: v1.26.2
	I0307 10:28:14.516254    7018 api_server.go:130] duration metric: took 4.215899257s to wait for apiserver health ...
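
The 403 → 500 → 200 progression above is the normal kube-apiserver boot sequence: unauthenticated probes are Forbidden until the RBAC bootstrap roles exist, /healthz then reports 500 while post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes finish, and only then does it return 200 "ok". A minimal standalone sketch of this kind of polling loop, in the spirit of the api_server.go code driving the log (hypothetical, not minikube's actual implementation; it skips TLS verification because it has no access to the cluster CA, whereas minikube uses the profile's client certificates):

	// waitForHealthz polls /healthz until it returns 200 or the timeout expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// 403 (anonymous user) and 500 (post-start hooks pending) are
				// both expected while the control plane is still coming up.
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.64.12:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
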
	I0307 10:28:14.516265    7018 cni.go:84] Creating CNI manager for ""
	I0307 10:28:14.516271    7018 cni.go:136] 3 nodes found, recommending kindnet
	I0307 10:28:14.538513    7018 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0307 10:28:14.558703    7018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 10:28:14.565010    7018 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0307 10:28:14.565023    7018 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0307 10:28:14.565030    7018 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0307 10:28:14.565035    7018 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0307 10:28:14.565040    7018 command_runner.go:130] > Access: 2023-03-07 18:27:25.800133630 +0000
	I0307 10:28:14.565044    7018 command_runner.go:130] > Modify: 2023-02-24 23:58:49.000000000 +0000
	I0307 10:28:14.565049    7018 command_runner.go:130] > Change: 2023-03-07 18:27:24.520133706 +0000
	I0307 10:28:14.565052    7018 command_runner.go:130] >  Birth: -
	I0307 10:28:14.565080    7018 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
	I0307 10:28:14.565086    7018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0307 10:28:14.614484    7018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 10:28:15.463255    7018 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0307 10:28:15.465520    7018 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0307 10:28:15.467209    7018 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0307 10:28:15.486465    7018 command_runner.go:130] > daemonset.apps/kindnet configured
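
Because three nodes were found, minikube selects kindnet as the CNI, copies the manifest to /var/tmp/minikube/cni.yaml over SSH, and applies it with the version-pinned kubectl; "unchanged" vs. "configured" in the output shows which server-side objects actually differed from the manifest. A rough local equivalent of that apply step as a Go sketch (hypothetical; minikube runs this on the node through its ssh_runner rather than locally):

	// Re-applies the CNI manifest with the pinned kubectl binary.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.26.2/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl apply failed: %v\n", err)
		}
		fmt.Printf("%s", out) // "unchanged" / "configured" per object
	}
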
	I0307 10:28:15.487964    7018 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 10:28:15.488018    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:15.488023    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.488030    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.488035    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.490928    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:15.490936    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.490945    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.490952    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.490959    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.490966    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.490971    7018 round_trippers.go:580]     Audit-Id: fbf2e35b-55b7-466f-9275-31e56ce04183
	I0307 10:28:15.490978    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.492557    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1032"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"402","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81648 chars]
	I0307 10:28:15.495381    7018 system_pods.go:59] 12 kube-system pods found
	I0307 10:28:15.495395    7018 system_pods.go:61] "coredns-787d4945fb-x8m8v" [c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6] Running
	I0307 10:28:15.495400    7018 system_pods.go:61] "etcd-multinode-260000" [aa53b0f1-968e-450d-90b2-ad26a79cea99] Running
	I0307 10:28:15.495403    7018 system_pods.go:61] "kindnet-gfgwn" [64dc8044-f77e-41b4-bb19-1a254bf29e05] Running
	I0307 10:28:15.495407    7018 system_pods.go:61] "kindnet-j5gj9" [f17b9702-c5c0-4b31-a136-e0370bc62d79] Running
	I0307 10:28:15.495411    7018 system_pods.go:61] "kindnet-z6kqp" [4884d21b-1be9-4b53-8f70-dd4fe0efa264] Running
	I0307 10:28:15.495415    7018 system_pods.go:61] "kube-apiserver-multinode-260000" [64ba25bc-eee2-433a-b0ef-a13769f04555] Running
	I0307 10:28:15.495421    7018 system_pods.go:61] "kube-controller-manager-multinode-260000" [8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0307 10:28:15.495425    7018 system_pods.go:61] "kube-proxy-8qwhq" [3e455149-bbe2-4173-a413-f4962626b233] Running
	I0307 10:28:15.495429    7018 system_pods.go:61] "kube-proxy-pxshj" [3ee33e87-083d-4833-a6d4-8b459ec6ea70] Running
	I0307 10:28:15.495433    7018 system_pods.go:61] "kube-proxy-q8cm8" [b9f69548-a872-4d80-aa73-ffba99b33229] Running
	I0307 10:28:15.495437    7018 system_pods.go:61] "kube-scheduler-multinode-260000" [0739e1eb-4026-47ee-b2fe-6a9901c77317] Running
	I0307 10:28:15.495441    7018 system_pods.go:61] "storage-provisioner" [0b88c317-8e90-4927-b4f8-cae5597b5dc8] Running
	I0307 10:28:15.495444    7018 system_pods.go:74] duration metric: took 7.473493ms to wait for pod list to return data ...
	I0307 10:28:15.495451    7018 node_conditions.go:102] verifying NodePressure condition ...
	I0307 10:28:15.495484    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
	I0307 10:28:15.495488    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.495494    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.495499    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.497193    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.497203    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.497209    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.497215    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.497225    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.497237    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.497246    7018 round_trippers.go:580]     Audit-Id: 87494186-1238-43d5-866d-3fb8cf3ac670
	I0307 10:28:15.497252    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.497439    7018 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1032"},"items":[{"metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16457 chars]
	I0307 10:28:15.497964    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:28:15.497980    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:28:15.497991    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:28:15.497994    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:28:15.497998    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:28:15.498001    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:28:15.498005    7018 node_conditions.go:105] duration metric: took 2.549988ms to run NodePressure ...
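
The NodePressure step lists /api/v1/nodes and reads each node's capacity (here 17784752Ki of ephemeral storage and 2 CPUs per node). An equivalent check written against client-go, which the minikube binary builds on (a sketch; it assumes a kubeconfig at the default path instead of minikube's in-process client):

	// Lists cluster nodes and prints the capacity fields inspected above.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
				n.Status.Capacity.StorageEphemeral().String(),
				n.Status.Capacity.Cpu().String())
		}
	}
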
	I0307 10:28:15.498014    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:28:15.613921    7018 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0307 10:28:15.647095    7018 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0307 10:28:15.648104    7018 command_runner.go:130] ! W0307 18:28:15.688091    2114 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
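
The kubeadm warning repeated here and earlier in the log is cosmetic: the configured criSocket, /var/run/cri-dockerd.sock, lacks a URL scheme, so kubeadm prepends unix:// itself. The normalization the warning text describes amounts to the following sketch (writing unix:///var/run/cri-dockerd.sock in the config, as the node annotations above already do, silences it):

	// normalizeCRISocket mirrors the scheme-prepending the warning describes.
	package main

	import (
		"fmt"
		"strings"
	)

	func normalizeCRISocket(s string) string {
		if !strings.Contains(s, "://") {
			return "unix://" + s
		}
		return s
	}

	func main() {
		fmt.Println(normalizeCRISocket("/var/run/cri-dockerd.sock")) // unix:///var/run/cri-dockerd.sock
	}
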
	I0307 10:28:15.648194    7018 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0307 10:28:15.648246    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0307 10:28:15.648251    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.648257    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.648262    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.650635    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:15.650643    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.650648    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.650653    7018 round_trippers.go:580]     Audit-Id: cb509b59-97eb-4381-8070-69cc8abdab39
	I0307 10:28:15.650664    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.650670    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.650675    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.650683    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.651119    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1034"},"items":[{"metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"288","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28366 chars]
	I0307 10:28:15.651785    7018 kubeadm.go:784] kubelet initialised
	I0307 10:28:15.651796    7018 kubeadm.go:785] duration metric: took 3.59091ms waiting for restarted kubelet to initialise ...
	I0307 10:28:15.651802    7018 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 10:28:15.651829    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:15.651834    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.651840    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.651856    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.654797    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:15.654807    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.654812    7018 round_trippers.go:580]     Audit-Id: a9d90e98-0ed7-4ce3-b64a-cc82a3347b6f
	I0307 10:28:15.654817    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.654823    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.654828    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.654832    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.654837    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.656020    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1034"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"402","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81648 chars]
	I0307 10:28:15.657761    7018 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:15.657793    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:15.657798    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.657805    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.657811    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.659065    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.659077    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.659085    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.659092    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.659098    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.659104    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.659109    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.659115    7018 round_trippers.go:580]     Audit-Id: eb2db07a-7079-4adb-a12f-c3919e2af0f0
	I0307 10:28:15.659276    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"402","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6281 chars]
	I0307 10:28:15.659508    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:15.659514    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.659520    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.659526    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.660689    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.660696    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.660701    7018 round_trippers.go:580]     Audit-Id: 4dd3efdc-1609-4f2d-9ae0-4a842093d527
	I0307 10:28:15.660706    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.660711    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.660717    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.660724    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.660734    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.660828    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:15.660996    7018 pod_ready.go:97] node "multinode-260000" hosting pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:15.661003    7018 pod_ready.go:81] duration metric: took 3.233228ms waiting for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
	E0307 10:28:15.661009    7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
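
This skip pattern repeats for every control-plane pod below: before waiting on a pod's Ready condition, minikube fetches the hosting node, and if the node itself reports Ready=False the per-pod wait is short-circuited with the WaitExtra error above, since a pod on a NotReady node cannot become Ready. The node-side check reduces to something like the following (a sketch using client-go types, not the literal pod_ready.go code):

	// nodeReady reports whether a Node's NodeReady condition is True.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		n := &corev1.Node{}
		n.Status.Conditions = []corev1.NodeCondition{
			{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
		}
		fmt.Println(nodeReady(n)) // false, so the per-pod wait would be skipped
	}
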
	I0307 10:28:15.661014    7018 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:15.661036    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-260000
	I0307 10:28:15.661040    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.661046    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.661051    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.662218    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.662226    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.662232    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.662238    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.662244    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.662249    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.662254    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.662258    7018 round_trippers.go:580]     Audit-Id: eeb6ea95-4efc-44d3-86d7-f3e9abc4f441
	I0307 10:28:15.662373    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"288","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5846 chars]
	I0307 10:28:15.662566    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:15.662572    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.662578    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.662586    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.663695    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.663702    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.663708    7018 round_trippers.go:580]     Audit-Id: 0c08723d-f6d6-4c3f-bc19-ce14073bddc8
	I0307 10:28:15.663713    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.663718    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.663724    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.663728    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.663733    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.663841    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:15.664005    7018 pod_ready.go:97] node "multinode-260000" hosting pod "etcd-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:15.664012    7018 pod_ready.go:81] duration metric: took 2.993408ms waiting for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
	E0307 10:28:15.664024    7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "etcd-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:15.664031    7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:15.664054    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-260000
	I0307 10:28:15.664059    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.664064    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.664070    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.665133    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.665140    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.665145    7018 round_trippers.go:580]     Audit-Id: d8155bb7-ed68-40c6-a807-4b433cb29ded
	I0307 10:28:15.665164    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.665181    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.665188    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.665193    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.665199    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.665314    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-260000","namespace":"kube-system","uid":"64ba25bc-eee2-433a-b0ef-a13769f04555","resourceVersion":"269","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"76402f877907c95a3936143f580968be","kubernetes.io/config.mirror":"76402f877907c95a3936143f580968be","kubernetes.io/config.seen":"2023-03-07T18:18:28.739580253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7383 chars]
	I0307 10:28:15.665528    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:15.665534    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.665540    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.665546    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.666728    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.666735    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.666743    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.666752    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.666761    7018 round_trippers.go:580]     Audit-Id: 90f98c95-77ef-4f41-8b0d-68655aa67aef
	I0307 10:28:15.666768    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.666773    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.666778    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.666842    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:15.667008    7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-apiserver-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:15.667016    7018 pod_ready.go:81] duration metric: took 2.97888ms waiting for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
	E0307 10:28:15.667021    7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-apiserver-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:15.667025    7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:15.688093    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-260000
	I0307 10:28:15.688109    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.688116    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.688121    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.689605    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.689619    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.689626    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.689631    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.689636    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.689642    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.689649    7018 round_trippers.go:580]     Audit-Id: 30247593-c3f9-4f0b-8ec3-84987c2d98e7
	I0307 10:28:15.689656    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.689775    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-260000","namespace":"kube-system","uid":"8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c","resourceVersion":"1031","creationTimestamp":"2023-03-07T18:18:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.mirror":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.seen":"2023-03-07T18:18:16.838236256Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7421 chars]
	I0307 10:28:15.888328    7018 request.go:622] Waited for 198.258292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:15.888357    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:15.888362    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.888370    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.888378    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.890719    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:15.890732    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.890738    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.890742    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.890748    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.890753    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:16 GMT
	I0307 10:28:15.890757    7018 round_trippers.go:580]     Audit-Id: 2c7858e8-abf5-4b14-91d6-55537d022b63
	I0307 10:28:15.890762    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.890832    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:15.891019    7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-controller-manager-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:15.891027    7018 pod_ready.go:81] duration metric: took 223.996649ms waiting for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
	E0307 10:28:15.891033    7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-controller-manager-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
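
From here on the log shows "Waited for ... due to client-side throttling": as the message itself notes, these pauses come from client-go's token-bucket rate limiter, not the API server's priority-and-fairness. rest.Config defaults to QPS=5 with Burst=10, so once the burst of pod and node GETs above spends its tokens, each further request queues for roughly 200 ms. A standalone demonstration with the same flowcontrol package (a sketch; the numbers are the client-go defaults, not values read from this run):

	// Shows the token-bucket behavior behind the throttling messages.
	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/util/flowcontrol"
	)

	func main() {
		limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10) // qps, burst
		start := time.Now()
		for i := 0; i < 15; i++ {
			limiter.Accept() // blocks ~200ms per call once the burst is spent
			fmt.Printf("request %2d at +%v\n", i, time.Since(start).Round(time.Millisecond))
		}
	}
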
	I0307 10:28:15.891041    7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:16.088078    7018 request.go:622] Waited for 197.006181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
	I0307 10:28:16.088110    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
	I0307 10:28:16.088145    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:16.088152    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:16.088171    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:16.090139    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:16.090148    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:16.090153    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:16 GMT
	I0307 10:28:16.090158    7018 round_trippers.go:580]     Audit-Id: 33bdce0d-afd5-41b3-be54-1778f67df277
	I0307 10:28:16.090163    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:16.090168    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:16.090174    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:16.090180    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:16.090265    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8qwhq","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e455149-bbe2-4173-a413-f4962626b233","resourceVersion":"359","creationTimestamp":"2023-03-07T18:18:41Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0307 10:28:16.289549    7018 request.go:622] Waited for 199.030503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:16.289608    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:16.289613    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:16.289619    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:16.289625    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:16.291464    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:16.291474    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:16.291480    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:16.291486    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:16.291491    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:16.291497    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:16 GMT
	I0307 10:28:16.291502    7018 round_trippers.go:580]     Audit-Id: 304d1604-8237-4817-97b8-2398828df2aa
	I0307 10:28:16.291512    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:16.291606    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:16.291814    7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-proxy-8qwhq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:16.291823    7018 pod_ready.go:81] duration metric: took 400.77463ms waiting for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
	E0307 10:28:16.291829    7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-proxy-8qwhq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:16.291845    7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:16.488974    7018 request.go:622] Waited for 197.089772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
	I0307 10:28:16.489010    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
	I0307 10:28:16.489014    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:16.489021    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:16.489028    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:16.490668    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:16.490678    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:16.490684    7018 round_trippers.go:580]     Audit-Id: f7cf2cf1-fe75-45fb-b387-3c47e4ca38bf
	I0307 10:28:16.490689    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:16.490695    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:16.490699    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:16.490705    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:16.490710    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:16 GMT
	I0307 10:28:16.490783    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pxshj","generateName":"kube-proxy-","namespace":"kube-system","uid":"3ee33e87-083d-4833-a6d4-8b459ec6ea70","resourceVersion":"469","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0307 10:28:16.688164    7018 request.go:622] Waited for 197.086665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:16.688201    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:16.688207    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:16.688216    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:16.688224    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:16.690320    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:16.690331    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:16.690337    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:16.690347    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:16.690354    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:16.690360    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:16 GMT
	I0307 10:28:16.690365    7018 round_trippers.go:580]     Audit-Id: fafa8c79-056c-4482-a7d3-9af678647000
	I0307 10:28:16.690370    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:16.690435    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"75f8e0c4-47f4-43dc-ac5e-5f77d8d4ab3b","resourceVersion":"812","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4512 chars]
	I0307 10:28:16.690610    7018 pod_ready.go:92] pod "kube-proxy-pxshj" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:16.690616    7018 pod_ready.go:81] duration metric: took 398.761593ms waiting for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:16.690622    7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:16.888997    7018 request.go:622] Waited for 198.34143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
	I0307 10:28:16.889083    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
	I0307 10:28:16.889091    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:16.889099    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:16.889107    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:16.890960    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:16.890976    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:16.890988    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:16.890997    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:16.891006    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:17 GMT
	I0307 10:28:16.891013    7018 round_trippers.go:580]     Audit-Id: 2a6b83fb-355a-47d1-a5fb-041011c34ce5
	I0307 10:28:16.891021    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:16.891029    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:16.891126    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8cm8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9f69548-a872-4d80-aa73-ffba99b33229","resourceVersion":"1005","creationTimestamp":"2023-03-07T18:26:06Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0307 10:28:17.089042    7018 request.go:622] Waited for 197.667165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
	I0307 10:28:17.089099    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
	I0307 10:28:17.089104    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:17.089110    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:17.089123    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:17.092228    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:17.092240    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:17.092249    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:17 GMT
	I0307 10:28:17.092256    7018 round_trippers.go:580]     Audit-Id: 4d8ae72e-fdde-4d59-9a71-91d0c3ee68a0
	I0307 10:28:17.092264    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:17.092271    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:17.092276    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:17.092282    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:17.092354    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m03","uid":"c193c270-6b50-44d5-962f-c88bf307bb54","resourceVersion":"1019","creationTimestamp":"2023-03-07T18:26:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4450 chars]
	I0307 10:28:17.092536    7018 pod_ready.go:92] pod "kube-proxy-q8cm8" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:17.092542    7018 pod_ready.go:81] duration metric: took 401.914192ms waiting for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:17.092550    7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:17.289090    7018 request.go:622] Waited for 196.506508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
	I0307 10:28:17.289121    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
	I0307 10:28:17.289126    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:17.289133    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:17.289140    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:17.290898    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:17.290909    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:17.290915    7018 round_trippers.go:580]     Audit-Id: 9fb63a2b-6315-4a56-8919-8e3ff05df64c
	I0307 10:28:17.290920    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:17.290926    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:17.290932    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:17.290936    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:17.290941    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:17 GMT
	I0307 10:28:17.291122    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-260000","namespace":"kube-system","uid":"0739e1eb-4026-47ee-b2fe-6a9901c77317","resourceVersion":"1035","creationTimestamp":"2023-03-07T18:18:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.mirror":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.seen":"2023-03-07T18:18:28.739583516Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5133 chars]
	I0307 10:28:17.488710    7018 request.go:622] Waited for 197.357013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:17.488741    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:17.488773    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:17.488780    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:17.488786    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:17.492401    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:17.492411    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:17.492417    7018 round_trippers.go:580]     Audit-Id: 8a48812e-9efb-405d-92a7-d9eab408cfe7
	I0307 10:28:17.492429    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:17.492435    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:17.492439    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:17.492445    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:17.492449    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:17 GMT
	I0307 10:28:17.492517    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:17.492711    7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-scheduler-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:17.492718    7018 pod_ready.go:81] duration metric: took 400.162814ms waiting for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
	E0307 10:28:17.492724    7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-scheduler-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:17.492729    7018 pod_ready.go:38] duration metric: took 1.8409126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 10:28:17.492740    7018 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 10:28:17.500400    7018 command_runner.go:130] > -16
	I0307 10:28:17.500574    7018 ops.go:34] apiserver oom_adj: -16
	I0307 10:28:17.500584    7018 kubeadm.go:637] restartCluster took 20.241085671s
	I0307 10:28:17.500589    7018 kubeadm.go:403] StartCluster complete in 20.26361982s
	I0307 10:28:17.500600    7018 settings.go:142] acquiring lock: {Name:mk4d055ee1d778ec2752c0ce26b6fb536462adb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:28:17.500678    7018 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:28:17.501023    7018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15985-3430/kubeconfig: {Name:mkea569ea3041d84fd3aeaa788f308c9891aa7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:28:17.501262    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 10:28:17.501294    7018 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0307 10:28:17.546290    7018 out.go:177] * Enabled addons: 
	I0307 10:28:17.501457    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:28:17.501669    7018 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:28:17.583590    7018 addons.go:499] enable addons completed in 82.276784ms: enabled=[]
	I0307 10:28:17.583795    7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:28:17.584004    7018 round_trippers.go:463] GET https://192.168.64.12:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0307 10:28:17.584011    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:17.584017    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:17.584022    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:17.585901    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:17.585911    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:17.585917    7018 round_trippers.go:580]     Audit-Id: 381c106f-61b9-4164-8d45-b690984d5352
	I0307 10:28:17.585927    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:17.585933    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:17.585937    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:17.585942    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:17.585947    7018 round_trippers.go:580]     Content-Length: 292
	I0307 10:28:17.585952    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:17 GMT
	I0307 10:28:17.585965    7018 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9058bb7-5525-4245-a92a-3b0f0144c5d4","resourceVersion":"1033","creationTimestamp":"2023-03-07T18:18:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0307 10:28:17.586053    7018 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-260000" context rescaled to 1 replicas
	I0307 10:28:17.586069    7018 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:28:17.598551    7018 command_runner.go:130] > apiVersion: v1
	I0307 10:28:17.607409    7018 command_runner.go:130] > data:
	I0307 10:28:17.607416    7018 command_runner.go:130] >   Corefile: |
	I0307 10:28:17.607423    7018 command_runner.go:130] >     .:53 {
	I0307 10:28:17.607394    7018 out.go:177] * Verifying Kubernetes components...
	I0307 10:28:17.607432    7018 command_runner.go:130] >         log
	I0307 10:28:17.665368    7018 command_runner.go:130] >         errors
	I0307 10:28:17.665380    7018 command_runner.go:130] >         health {
	I0307 10:28:17.665387    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:28:17.665390    7018 command_runner.go:130] >            lameduck 5s
	I0307 10:28:17.665471    7018 command_runner.go:130] >         }
	I0307 10:28:17.665485    7018 command_runner.go:130] >         ready
	I0307 10:28:17.665501    7018 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0307 10:28:17.665515    7018 command_runner.go:130] >            pods insecure
	I0307 10:28:17.665530    7018 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0307 10:28:17.665540    7018 command_runner.go:130] >            ttl 30
	I0307 10:28:17.665547    7018 command_runner.go:130] >         }
	I0307 10:28:17.665555    7018 command_runner.go:130] >         prometheus :9153
	I0307 10:28:17.665561    7018 command_runner.go:130] >         hosts {
	I0307 10:28:17.665581    7018 command_runner.go:130] >            192.168.64.1 host.minikube.internal
	I0307 10:28:17.665589    7018 command_runner.go:130] >            fallthrough
	I0307 10:28:17.665596    7018 command_runner.go:130] >         }
	I0307 10:28:17.665604    7018 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0307 10:28:17.665613    7018 command_runner.go:130] >            max_concurrent 1000
	I0307 10:28:17.665622    7018 command_runner.go:130] >         }
	I0307 10:28:17.665633    7018 command_runner.go:130] >         cache 30
	I0307 10:28:17.665648    7018 command_runner.go:130] >         loop
	I0307 10:28:17.665659    7018 command_runner.go:130] >         reload
	I0307 10:28:17.665673    7018 command_runner.go:130] >         loadbalance
	I0307 10:28:17.665700    7018 command_runner.go:130] >     }
	I0307 10:28:17.665714    7018 command_runner.go:130] > kind: ConfigMap
	I0307 10:28:17.665724    7018 command_runner.go:130] > metadata:
	I0307 10:28:17.665738    7018 command_runner.go:130] >   creationTimestamp: "2023-03-07T18:18:28Z"
	I0307 10:28:17.665750    7018 command_runner.go:130] >   name: coredns
	I0307 10:28:17.665761    7018 command_runner.go:130] >   namespace: kube-system
	I0307 10:28:17.665769    7018 command_runner.go:130] >   resourceVersion: "361"
	I0307 10:28:17.665778    7018 command_runner.go:130] >   uid: ab4f9271-2ad1-469a-9991-ac0e7cd4eee1
	I0307 10:28:17.665875    7018 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0307 10:28:17.677281    7018 node_ready.go:35] waiting up to 6m0s for node "multinode-260000" to be "Ready" ...
	I0307 10:28:17.688141    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:17.688153    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:17.688160    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:17.688165    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:17.699560    7018 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0307 10:28:17.699573    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:17.699579    7018 round_trippers.go:580]     Audit-Id: b0a8d418-5306-402d-aafe-b01480d098d1
	I0307 10:28:17.699584    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:17.699588    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:17.699594    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:17.699602    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:17.699607    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:17 GMT
	I0307 10:28:17.699666    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:18.201280    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:18.201301    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:18.201313    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:18.201324    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:18.205520    7018 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 10:28:18.205536    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:18.205545    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:18 GMT
	I0307 10:28:18.205551    7018 round_trippers.go:580]     Audit-Id: 93568139-27e9-412b-aabc-a063cf381701
	I0307 10:28:18.205556    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:18.205560    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:18.205566    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:18.205571    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:18.205679    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:18.700510    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:18.700532    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:18.700545    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:18.700556    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:18.703654    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:18.703670    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:18.703678    7018 round_trippers.go:580]     Audit-Id: fe05d8ff-851d-43ec-87d1-ea8137b7dbe8
	I0307 10:28:18.703684    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:18.703691    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:18.703714    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:18.703725    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:18.703732    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:18 GMT
	I0307 10:28:18.703813    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:19.202177    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:19.202200    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:19.202214    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:19.202227    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:19.205274    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:19.205290    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:19.205298    7018 round_trippers.go:580]     Audit-Id: 01e6aee3-dfa5-4ab3-b092-2707828ba795
	I0307 10:28:19.205331    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:19.205342    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:19.205349    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:19.205357    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:19.205364    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:19 GMT
	I0307 10:28:19.205470    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:19.700708    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:19.700729    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:19.700741    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:19.700751    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:19.703406    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:19.703422    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:19.703431    7018 round_trippers.go:580]     Audit-Id: 3a975007-4ad9-4952-af4f-5375799e6a1a
	I0307 10:28:19.703439    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:19.703445    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:19.703452    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:19.703458    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:19.703466    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:19 GMT
	I0307 10:28:19.703543    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:19.703788    7018 node_ready.go:58] node "multinode-260000" has status "Ready":"False"
	I0307 10:28:20.200489    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:20.200509    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:20.200521    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:20.200531    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:20.203162    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:20.203178    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:20.203186    7018 round_trippers.go:580]     Audit-Id: a8a0b987-0c00-4eb2-84cc-bb8ba63cb67a
	I0307 10:28:20.203193    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:20.203202    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:20.203212    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:20.203220    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:20.203228    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:20 GMT
	I0307 10:28:20.203489    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:20.700672    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:20.700696    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:20.700709    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:20.700725    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:20.703549    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:20.703565    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:20.703573    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:20.703580    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:20.703586    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:20.703593    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:20 GMT
	I0307 10:28:20.703599    7018 round_trippers.go:580]     Audit-Id: efe8aac9-6cb0-4496-83f5-15dd81197a83
	I0307 10:28:20.703607    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:20.703677    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:21.201352    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:21.201373    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:21.201385    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:21.201395    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:21.204173    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:21.204190    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:21.204197    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:21 GMT
	I0307 10:28:21.204205    7018 round_trippers.go:580]     Audit-Id: be92e2ce-4712-4f1e-861a-703e11d6cba4
	I0307 10:28:21.204220    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:21.204229    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:21.204235    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:21.204243    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:21.204341    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:21.700804    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:21.700827    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:21.700840    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:21.700851    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:21.703563    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:21.703580    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:21.703588    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:21.703595    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:21.703602    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:21 GMT
	I0307 10:28:21.703609    7018 round_trippers.go:580]     Audit-Id: d76a302b-b114-4fb6-a945-db5c79d73c04
	I0307 10:28:21.703616    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:21.703622    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:21.703693    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:21.703979    7018 node_ready.go:58] node "multinode-260000" has status "Ready":"False"
	I0307 10:28:22.200196    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:22.200216    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:22.200229    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:22.200239    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:22.202586    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:22.202599    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:22.202606    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:22.202614    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:22 GMT
	I0307 10:28:22.202622    7018 round_trippers.go:580]     Audit-Id: 4ff0cc55-c046-416f-9185-daae0bebce4a
	I0307 10:28:22.202632    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:22.202639    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:22.202696    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:22.202811    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:22.700709    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:22.700730    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:22.700742    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:22.700752    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:22.702936    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:22.723882    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:22.723896    7018 round_trippers.go:580]     Audit-Id: 29769d58-0043-4d39-82f0-cccd4df4015a
	I0307 10:28:22.723957    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:22.723969    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:22.723978    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:22.723988    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:22.723998    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:22 GMT
	I0307 10:28:22.724094    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:23.200620    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:23.200644    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:23.200657    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:23.200667    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:23.203465    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:23.203481    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:23.203489    7018 round_trippers.go:580]     Audit-Id: 9e76918b-04a7-460f-b7a3-1bb26e8c0971
	I0307 10:28:23.203496    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:23.203502    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:23.203510    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:23.203517    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:23.203523    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:23 GMT
	I0307 10:28:23.203617    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:23.700169    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:23.700191    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:23.700203    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:23.700213    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:23.703029    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:23.703045    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:23.703053    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:23.703059    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:23.703067    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:23.703076    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:23.703088    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:23 GMT
	I0307 10:28:23.703098    7018 round_trippers.go:580]     Audit-Id: ef8f12d5-7107-46fa-a902-ce29a6cd21c5
	I0307 10:28:23.703227    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:23.703480    7018 node_ready.go:49] node "multinode-260000" has status "Ready":"True"
	I0307 10:28:23.703494    7018 node_ready.go:38] duration metric: took 6.026171359s waiting for node "multinode-260000" to be "Ready" ...
	I0307 10:28:23.703502    7018 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 10:28:23.703549    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:23.703555    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:23.703563    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:23.703572    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:23.705759    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:23.705769    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:23.705780    7018 round_trippers.go:580]     Audit-Id: 67287338-b563-4ece-963d-6a23473c12f5
	I0307 10:28:23.705788    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:23.705795    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:23.705804    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:23.705811    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:23.705818    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:23 GMT
	I0307 10:28:23.706556    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1094"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83638 chars]
	I0307 10:28:23.708320    7018 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:23.708353    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:23.708358    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:23.708374    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:23.708381    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:23.709654    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:23.709668    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:23.709674    7018 round_trippers.go:580]     Audit-Id: 31e97546-40fd-4948-9b6f-419bdad39a05
	I0307 10:28:23.709680    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:23.709685    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:23.709690    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:23.709696    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:23.709701    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:23 GMT
	I0307 10:28:23.709974    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:23.710200    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:23.710205    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:23.710212    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:23.710218    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:23.711266    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:23.711276    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:23.711284    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:23.711291    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:23.711299    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:23.711307    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:23.711316    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:23 GMT
	I0307 10:28:23.711324    7018 round_trippers.go:580]     Audit-Id: ef253b5e-8ae9-4c22-97b4-635ece1c07f1
	I0307 10:28:23.711443    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:24.211832    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:24.211854    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:24.211868    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:24.211879    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:24.214134    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:24.214147    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:24.214155    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:24.214161    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:24.214169    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:24 GMT
	I0307 10:28:24.214176    7018 round_trippers.go:580]     Audit-Id: 7cceac8c-72f2-43b3-a70c-da8298a351ea
	I0307 10:28:24.214183    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:24.214189    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:24.214267    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:24.214622    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:24.214631    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:24.214639    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:24.214647    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:24.216139    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:24.216148    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:24.216154    7018 round_trippers.go:580]     Audit-Id: 651af490-ed9e-4eba-a495-32b2210d00c4
	I0307 10:28:24.216159    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:24.216167    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:24.216176    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:24.216187    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:24.216193    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:24 GMT
	I0307 10:28:24.216294    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:24.712583    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:24.712604    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:24.712617    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:24.712627    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:24.715128    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:24.715141    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:24.715151    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:24.715174    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:24.715202    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:24.715215    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:24 GMT
	I0307 10:28:24.715229    7018 round_trippers.go:580]     Audit-Id: 64f8c7b5-e206-4888-b04e-57f95c098459
	I0307 10:28:24.715263    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:24.715362    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:24.715724    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:24.715733    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:24.715741    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:24.715748    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:24.717117    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:24.717131    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:24.717139    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:24.717149    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:24.717158    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:24 GMT
	I0307 10:28:24.717165    7018 round_trippers.go:580]     Audit-Id: 39facfb8-6882-4093-a54a-be9e41cdcd8a
	I0307 10:28:24.717189    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:24.717203    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:24.717297    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:25.211941    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:25.211961    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:25.211973    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:25.211984    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:25.214996    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:25.215012    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:25.215056    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:25.215076    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:25.215089    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:25.215121    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:25 GMT
	I0307 10:28:25.215133    7018 round_trippers.go:580]     Audit-Id: eab464a3-fd8c-4abd-92da-a9e3fab09b87
	I0307 10:28:25.215153    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:25.215232    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:25.215588    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:25.215596    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:25.215604    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:25.215611    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:25.216989    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:25.217000    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:25.217005    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:25.217010    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:25 GMT
	I0307 10:28:25.217021    7018 round_trippers.go:580]     Audit-Id: 1b48fc62-d0ae-42f1-a567-d263b0778b46
	I0307 10:28:25.217026    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:25.217031    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:25.217038    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:25.217228    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:25.713156    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:25.713175    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:25.713187    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:25.713197    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:25.715881    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:25.715901    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:25.715913    7018 round_trippers.go:580]     Audit-Id: b458a53f-cebf-4dba-b1b0-795a83b24bef
	I0307 10:28:25.715924    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:25.715933    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:25.715939    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:25.715946    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:25.715956    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:25 GMT
	I0307 10:28:25.716134    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:25.716499    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:25.716508    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:25.716516    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:25.716523    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:25.717669    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:25.717677    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:25.717683    7018 round_trippers.go:580]     Audit-Id: 1eb8ab80-758c-4e81-8dcb-159f98be89b6
	I0307 10:28:25.717691    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:25.717698    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:25.717705    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:25.717711    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:25.717717    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:25 GMT
	I0307 10:28:25.717847    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:25.718043    7018 pod_ready.go:102] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"False"
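The pod_ready.go:102 line above is the poller's summary: the coredns pod's Ready condition is still False, so the loop keeps re-fetching the pod (and its node) roughly every 500ms, which is what produces the repeating GET pairs throughout this log. A minimal sketch of such a readiness poll is below, assuming client-go and illustrative names; it is not minikube's actual pod_ready.go implementation, and it simplifies by checking only the pod, whereas the real loop also re-fetches the node each cycle.

    // A minimal sketch (assumptions: client-go, default kubeconfig) of a
    // readiness poll like the one driving this log: every 500ms, fetch the
    // pod and check its Ready condition until it is True or a timeout hits.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll the same pod the log shows, at the same ~500ms cadence.
    	err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"coredns-787d4945fb-x8m8v", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat API errors as transient; keep polling
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil // no Ready condition yet
    	})
    	fmt.Println("pod ready:", err == nil)
    }
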
	I0307 10:28:26.211810    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:26.211826    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:26.211833    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:26.211854    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:26.217580    7018 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 10:28:26.217593    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:26.217599    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:26.217624    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:26 GMT
	I0307 10:28:26.217634    7018 round_trippers.go:580]     Audit-Id: 25844fb6-cd84-4dd3-af18-9f89ee6d5a04
	I0307 10:28:26.217641    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:26.217646    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:26.217651    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:26.218222    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:26.218502    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:26.218509    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:26.218515    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:26.218520    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:26.223546    7018 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 10:28:26.223558    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:26.223563    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:26 GMT
	I0307 10:28:26.223568    7018 round_trippers.go:580]     Audit-Id: bf250b8a-6074-45b3-9f33-45ad42a6a343
	I0307 10:28:26.223573    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:26.223578    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:26.223582    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:26.223587    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:26.224042    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:26.713218    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:26.713243    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:26.713255    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:26.713265    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:26.716102    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:26.716121    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:26.716129    7018 round_trippers.go:580]     Audit-Id: 219d5f63-3a7c-44c7-8b51-2921f95c2710
	I0307 10:28:26.716136    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:26.716144    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:26.716151    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:26.716157    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:26.716165    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:26 GMT
	I0307 10:28:26.716247    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:26.716596    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:26.716604    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:26.716612    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:26.716619    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:26.718244    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:26.718252    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:26.718258    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:26.718264    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:26.718274    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:26.718280    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:26.718288    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:26 GMT
	I0307 10:28:26.718293    7018 round_trippers.go:580]     Audit-Id: ad769d45-1dbe-4f0f-bad4-953da8623939
	I0307 10:28:26.718441    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:27.212704    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:27.212727    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:27.212739    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:27.212749    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:27.215311    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:27.215337    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:27.215345    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:27.215353    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:27.215361    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:27 GMT
	I0307 10:28:27.215367    7018 round_trippers.go:580]     Audit-Id: 36856e4f-a7e1-45d6-97ce-8f885ac8c841
	I0307 10:28:27.215374    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:27.215381    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:27.215565    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:27.215939    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:27.215948    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:27.215956    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:27.215964    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:27.217347    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:27.217354    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:27.217362    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:27.217368    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:27 GMT
	I0307 10:28:27.217374    7018 round_trippers.go:580]     Audit-Id: d6676113-bd9a-4eaf-ba1b-019818744e42
	I0307 10:28:27.217381    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:27.217389    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:27.217404    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:27.217556    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:27.711824    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:27.724865    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:27.724880    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:27.724887    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:27.726579    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:27.726589    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:27.726594    7018 round_trippers.go:580]     Audit-Id: 0d01fa41-8246-4722-9399-93a5592f6b29
	I0307 10:28:27.726599    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:27.726606    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:27.726613    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:27.726619    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:27.726624    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:27 GMT
	I0307 10:28:27.726876    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:27.727175    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:27.727181    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:27.727187    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:27.727192    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:27.728314    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:27.728322    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:27.728334    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:27.728347    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:27.728353    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:27.728370    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:27 GMT
	I0307 10:28:27.728379    7018 round_trippers.go:580]     Audit-Id: 0e3e9ef9-ecac-45df-aee2-aff56bc03a97
	I0307 10:28:27.728391    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:27.728478    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:27.728664    7018 pod_ready.go:102] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"False"
	I0307 10:28:28.212950    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:28.212969    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:28.212982    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:28.212992    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:28.216019    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:28.216035    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:28.216043    7018 round_trippers.go:580]     Audit-Id: 24e3382f-877e-4bd3-9d01-53648e905133
	I0307 10:28:28.216051    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:28.216057    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:28.216064    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:28.216072    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:28.216078    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:28 GMT
	I0307 10:28:28.216218    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:28.216592    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:28.216601    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:28.216610    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:28.216617    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:28.218098    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:28.218109    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:28.218116    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:28 GMT
	I0307 10:28:28.218121    7018 round_trippers.go:580]     Audit-Id: ba13bf42-a23e-4b8b-b82d-f134c64fb02d
	I0307 10:28:28.218133    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:28.218139    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:28.218144    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:28.218149    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:28.218380    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:28.713844    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:28.713872    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:28.713886    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:28.713897    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:28.717059    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:28.717075    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:28.717082    7018 round_trippers.go:580]     Audit-Id: 2d17ebc7-34f0-4220-a01c-eba9dc18629b
	I0307 10:28:28.717089    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:28.717096    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:28.717102    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:28.717109    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:28.717115    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:28 GMT
	I0307 10:28:28.717206    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:28.717584    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:28.717593    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:28.717601    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:28.717609    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:28.718961    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:28.718971    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:28.718978    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:28.718982    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:28.718987    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:28.718992    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:28 GMT
	I0307 10:28:28.718997    7018 round_trippers.go:580]     Audit-Id: 1a95c19b-155c-4919-8f52-e4a21e53e43d
	I0307 10:28:28.719002    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:28.719162    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:29.212285    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:29.212298    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:29.212305    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:29.212310    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:29.214049    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:29.214059    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:29.214065    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:29.214070    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:29.214075    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:29.214080    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:29 GMT
	I0307 10:28:29.214087    7018 round_trippers.go:580]     Audit-Id: 5902e368-f17f-4c82-9c7c-675d086888dd
	I0307 10:28:29.214092    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:29.214228    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:29.214511    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:29.214517    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:29.214523    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:29.214529    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:29.215699    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:29.215709    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:29.215716    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:29.215723    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:29 GMT
	I0307 10:28:29.215729    7018 round_trippers.go:580]     Audit-Id: b6d6f5f7-09c3-4195-a4c1-845aef7ffc32
	I0307 10:28:29.215734    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:29.215740    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:29.215747    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:29.215925    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:29.713052    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:29.713064    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:29.713070    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:29.713076    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:29.714443    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:29.714452    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:29.714457    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:29.714463    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:29.714468    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:29.714479    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:29 GMT
	I0307 10:28:29.714484    7018 round_trippers.go:580]     Audit-Id: 9c79de10-38b6-4cc5-8a5c-f518875339a0
	I0307 10:28:29.714489    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:29.714549    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:29.714827    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:29.714833    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:29.714839    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:29.714844    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:29.723979    7018 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0307 10:28:29.723993    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:29.724011    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:29.724019    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:29.724028    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:29.724034    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:29 GMT
	I0307 10:28:29.724040    7018 round_trippers.go:580]     Audit-Id: 23a3f013-edd3-4bde-b9dc-3cdee57361b7
	I0307 10:28:29.724046    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:29.724143    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.211801    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:30.211812    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.211819    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.211824    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.213958    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:30.213967    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.213972    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.213979    7018 round_trippers.go:580]     Audit-Id: e3914bca-23b4-48cb-b3f3-c3e31ebe9b8e
	I0307 10:28:30.213984    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.213989    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.213994    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.213999    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.219685    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:30.219986    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:30.219995    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.220004    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.220012    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.221717    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:30.221732    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.221741    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.221756    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.221762    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.221769    7018 round_trippers.go:580]     Audit-Id: f3b83e3d-bec0-444f-bd00-ec3be70f6d10
	I0307 10:28:30.221777    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.221783    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.221864    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.222060    7018 pod_ready.go:102] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"False"
	I0307 10:28:30.712597    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:30.712622    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.712717    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.712731    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.716221    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:30.716239    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.716247    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.716256    7018 round_trippers.go:580]     Audit-Id: c7b16bdb-1c9a-42a3-b989-2ef728451887
	I0307 10:28:30.716263    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.716270    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.716278    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.716284    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.716375    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6489 chars]
	I0307 10:28:30.716777    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:30.716785    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.716793    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.716801    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.718436    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:30.718450    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.718457    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.718466    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.718473    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.718480    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.718485    7018 round_trippers.go:580]     Audit-Id: 405256c2-a3b7-4450-9419-3e5f6172aabd
	I0307 10:28:30.718491    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.718618    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.718803    7018 pod_ready.go:92] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:30.718812    7018 pod_ready.go:81] duration metric: took 7.010451765s waiting for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
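The pod_ready.go lines above are minikube's readiness poll: the pod is re-fetched on a roughly 500ms cadence (the GETs at 10:28:29.7, 10:28:30.2, 10:28:30.7) until its PodReady condition flips from "False" to "True". A minimal standalone sketch of that loop follows, using client-go against a local kubeconfig; waitPodReady is a hypothetical helper for illustration, not minikube's actual pod_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls GET /api/v1/namespaces/{ns}/pods/{name} until the pod's
// PodReady condition is True or the timeout elapses, logging each check the
// way the pod_ready.go lines above do.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, c.Status)
				if c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll interval seen in the log
	}
	return fmt.Errorf("timed out waiting for pod %s/%s to be Ready", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 6m0s matches the per-pod budget logged above.
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-787d4945fb-x8m8v", 6*time.Minute); err != nil {
		panic(err)
	}
}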
	I0307 10:28:30.718825    7018 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.718853    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-260000
	I0307 10:28:30.719043    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.719125    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.719139    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.721072    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:30.721084    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.721090    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.721095    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.721100    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.721105    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.721110    7018 round_trippers.go:580]     Audit-Id: ea8580ee-1e6e-4f3b-8474-356c1d7d09d5
	I0307 10:28:30.721114    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.721227    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"1080","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6056 chars]
	I0307 10:28:30.721443    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:30.721450    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.721456    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.721461    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.722677    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:30.722687    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.722699    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.722710    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.722719    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.722725    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.722731    7018 round_trippers.go:580]     Audit-Id: 9a6b5445-3298-4c53-9f39-0cfd9f3d0951
	I0307 10:28:30.722738    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.722826    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.723009    7018 pod_ready.go:92] pod "etcd-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:30.723015    7018 pod_ready.go:81] duration metric: took 4.185851ms waiting for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.723025    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.723049    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-260000
	I0307 10:28:30.723053    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.723059    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.723068    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.725808    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:30.725819    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.725824    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.725830    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.725835    7018 round_trippers.go:580]     Audit-Id: 27751b68-dbeb-4139-b048-aa37ba96ce0d
	I0307 10:28:30.725840    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.725844    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.725850    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.725930    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-260000","namespace":"kube-system","uid":"64ba25bc-eee2-433a-b0ef-a13769f04555","resourceVersion":"1143","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"76402f877907c95a3936143f580968be","kubernetes.io/config.mirror":"76402f877907c95a3936143f580968be","kubernetes.io/config.seen":"2023-03-07T18:18:28.739580253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7591 chars]
	I0307 10:28:30.726162    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:30.726168    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.726173    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.726179    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.727114    7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 10:28:30.727123    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.727129    7018 round_trippers.go:580]     Audit-Id: 09ac9355-1c65-4420-8f52-155883618aa6
	I0307 10:28:30.727134    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.727140    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.727145    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.727150    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.727155    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.727288    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.727470    7018 pod_ready.go:92] pod "kube-apiserver-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:30.727476    7018 pod_ready.go:81] duration metric: took 4.446202ms waiting for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.727481    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.727505    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-260000
	I0307 10:28:30.727510    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.727516    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.727522    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.728648    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:30.728659    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.728665    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.728670    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.728674    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.728679    7018 round_trippers.go:580]     Audit-Id: 559a8b88-70d9-4098-a5fd-ce69e6fc06be
	I0307 10:28:30.728684    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.728688    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.728916    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-260000","namespace":"kube-system","uid":"8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c","resourceVersion":"1131","creationTimestamp":"2023-03-07T18:18:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.mirror":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.seen":"2023-03-07T18:18:16.838236256Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7159 chars]
	I0307 10:28:30.729139    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:30.729145    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.729151    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.729157    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.730563    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:30.730570    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.730575    7018 round_trippers.go:580]     Audit-Id: 8efa58ee-7b42-4ba5-a878-ad10e7d3e33b
	I0307 10:28:30.730579    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.730584    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.730588    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.730593    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.730599    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.730701    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.730866    7018 pod_ready.go:92] pod "kube-controller-manager-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:30.730872    7018 pod_ready.go:81] duration metric: took 3.385852ms waiting for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.730877    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.730902    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
	I0307 10:28:30.730906    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.730912    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.730918    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.731885    7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 10:28:30.731894    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.731900    7018 round_trippers.go:580]     Audit-Id: ffc44502-d870-437e-9544-bf450ca2b814
	I0307 10:28:30.731906    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.731914    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.731920    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.731925    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.731930    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.732036    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8qwhq","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e455149-bbe2-4173-a413-f4962626b233","resourceVersion":"1061","creationTimestamp":"2023-03-07T18:18:41Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0307 10:28:30.732243    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:30.732248    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.732255    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.732260    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.733218    7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 10:28:30.733226    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.733232    7018 round_trippers.go:580]     Audit-Id: 3937160f-ce1c-4927-8fe0-6e7893d1567c
	I0307 10:28:30.733237    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.733244    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.733248    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.733253    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.733258    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.733356    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.733519    7018 pod_ready.go:92] pod "kube-proxy-8qwhq" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:30.733525    7018 pod_ready.go:81] duration metric: took 2.642988ms waiting for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.733531    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.912636    7018 request.go:622] Waited for 179.066998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
	I0307 10:28:30.912685    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
	I0307 10:28:30.912694    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.912778    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.912791    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.915495    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:30.915507    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.915515    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.915522    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.915530    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.915536    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.915544    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:31 GMT
	I0307 10:28:30.915550    7018 round_trippers.go:580]     Audit-Id: 3ae79f8d-1535-4d8e-a180-5f18227960da
	I0307 10:28:30.915655    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pxshj","generateName":"kube-proxy-","namespace":"kube-system","uid":"3ee33e87-083d-4833-a6d4-8b459ec6ea70","resourceVersion":"469","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
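The "Waited for ... due to client-side throttling, not priority and fairness" lines around here come from client-go's own rate limiter, not the apiserver: when rest.Config leaves QPS and Burst unset, client-go defaults to 5 QPS with a burst of 10, so a burst of per-pod GETs like the ones above queues locally for ~200ms each. A short sketch of that knob, assuming a local kubeconfig (the listing call is only there to generate traffic):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at zero, client-go applies its defaults (5 QPS,
	// burst 10) and logs the "client-side throttling" waits seen above at
	// higher verbosity. Raising them removes those pauses.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
}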
	I0307 10:28:31.114599    7018 request.go:622] Waited for 198.634122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:31.114628    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:31.114633    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:31.114642    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:31.114649    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:31.116473    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:31.116483    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:31.116488    7018 round_trippers.go:580]     Audit-Id: e955a99c-57ac-4ae0-a513-9afa809a5caf
	I0307 10:28:31.116493    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:31.116498    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:31.116503    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:31.116509    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:31.116513    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:31 GMT
	I0307 10:28:31.116688    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"75f8e0c4-47f4-43dc-ac5e-5f77d8d4ab3b","resourceVersion":"812","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4512 chars]
	I0307 10:28:31.116864    7018 pod_ready.go:92] pod "kube-proxy-pxshj" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:31.116870    7018 pod_ready.go:81] duration metric: took 383.333062ms waiting for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:31.116876    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:31.314683    7018 request.go:622] Waited for 197.728848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
	I0307 10:28:31.314736    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
	I0307 10:28:31.314770    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:31.314788    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:31.314803    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:31.317976    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:31.317992    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:31.318000    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:31 GMT
	I0307 10:28:31.318029    7018 round_trippers.go:580]     Audit-Id: a357c92b-2320-4582-b9e7-f62d05a9d4e3
	I0307 10:28:31.318042    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:31.318051    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:31.318057    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:31.318064    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:31.318199    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8cm8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9f69548-a872-4d80-aa73-ffba99b33229","resourceVersion":"1005","creationTimestamp":"2023-03-07T18:26:06Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0307 10:28:31.514054    7018 request.go:622] Waited for 195.505176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
	I0307 10:28:31.514146    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
	I0307 10:28:31.514242    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:31.514254    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:31.514267    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:31.517133    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:31.517148    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:31.517156    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:31.517163    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:31.517171    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:31.517178    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:31.517184    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:31 GMT
	I0307 10:28:31.517191    7018 round_trippers.go:580]     Audit-Id: 532579cf-d5cc-41c0-b38e-54a2f800d22f
	I0307 10:28:31.517302    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m03","uid":"c193c270-6b50-44d5-962f-c88bf307bb54","resourceVersion":"1109","creationTimestamp":"2023-03-07T18:26:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4330 chars]
	I0307 10:28:31.517527    7018 pod_ready.go:92] pod "kube-proxy-q8cm8" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:31.517534    7018 pod_ready.go:81] duration metric: took 400.651378ms waiting for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:31.517542    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:31.713858    7018 request.go:622] Waited for 196.240525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
	I0307 10:28:31.713912    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
	I0307 10:28:31.713952    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:31.713969    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:31.713983    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:31.716855    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:31.716871    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:31.716879    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:31.716894    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:31 GMT
	I0307 10:28:31.716902    7018 round_trippers.go:580]     Audit-Id: 291b5d9b-3357-4be3-9d0c-89832cae8ad3
	I0307 10:28:31.716910    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:31.716917    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:31.716924    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:31.717008    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-260000","namespace":"kube-system","uid":"0739e1eb-4026-47ee-b2fe-6a9901c77317","resourceVersion":"1139","creationTimestamp":"2023-03-07T18:18:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.mirror":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.seen":"2023-03-07T18:18:28.739583516Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4889 chars]
	I0307 10:28:31.912715    7018 request.go:622] Waited for 195.420936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:31.912766    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:31.912775    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:31.912789    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:31.912852    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:31.915496    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:31.915515    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:31.915523    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:32 GMT
	I0307 10:28:31.915532    7018 round_trippers.go:580]     Audit-Id: ab49a22e-b0ca-4460-8af6-f31980cc83e0
	I0307 10:28:31.915539    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:31.915547    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:31.915558    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:31.915565    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:31.915671    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:31.915930    7018 pod_ready.go:92] pod "kube-scheduler-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:31.915938    7018 pod_ready.go:81] duration metric: took 398.388063ms waiting for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:31.915946    7018 pod_ready.go:38] duration metric: took 8.212399171s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 10:28:31.915959    7018 api_server.go:51] waiting for apiserver process to appear ...
	I0307 10:28:31.916021    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:28:31.926000    7018 command_runner.go:130] > 1604
	I0307 10:28:31.926101    7018 api_server.go:71] duration metric: took 14.339953362s to wait for apiserver process to appear ...
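The process check above runs `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH inside the VM and treats the printed PID (1604 here) as proof the apiserver process exists. A hypothetical local equivalent using os/exec, with apiserverPID as an illustrative helper name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID mirrors the pgrep invocation logged above:
// -f matches against the full command line, -x requires the pattern to match
// that whole line, -n picks the newest match. pgrep exits non-zero when
// nothing matches, which surfaces here as an error.
func apiserverPID() (string, error) {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("apiserver process not found: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	pid, err := apiserverPID()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver pid:", pid) // e.g. 1604 in the log above
}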
	I0307 10:28:31.926109    7018 api_server.go:87] waiting for apiserver healthz status ...
	I0307 10:28:31.926115    7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0307 10:28:31.929766    7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 200:
	ok
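The healthz gate above is satisfied by a 200 response whose body is the literal string "ok". A minimal sketch of that probe which reuses the kubeconfig's client certificates instead of raw net/http (assuming a local kubeconfig; this is not minikube's api_server.go itself):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET https://<apiserver>/healthz; the body is "ok" when every
	// registered health check passes, matching the log line above.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("/healthz returned: %s\n", body)
}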
	I0307 10:28:31.929791    7018 round_trippers.go:463] GET https://192.168.64.12:8443/version
	I0307 10:28:31.929796    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:31.929803    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:31.929809    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:31.930265    7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 10:28:31.930272    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:31.930277    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:31.930283    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:31.930291    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:31.930297    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:31.930302    7018 round_trippers.go:580]     Content-Length: 263
	I0307 10:28:31.930307    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:32 GMT
	I0307 10:28:31.930313    7018 round_trippers.go:580]     Audit-Id: 416b7f0f-553f-48b8-8633-6be8897b3ddf
	I0307 10:28:31.930330    7018 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.2",
	  "gitCommit": "fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b",
	  "gitTreeState": "clean",
	  "buildDate": "2023-02-22T13:32:22Z",
	  "goVersion": "go1.19.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0307 10:28:31.930354    7018 api_server.go:140] control plane version: v1.26.2
	I0307 10:28:31.930360    7018 api_server.go:130] duration metric: took 4.24718ms to wait for apiserver health ...
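The JSON block above is the raw /version payload; the "control plane version: v1.26.2" line is its gitVersion field. client-go's discovery client decodes the same payload into a version.Info, as in this small sketch (assuming a local kubeconfig):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /version, decoded into k8s.io/apimachinery/pkg/version.Info.
	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", info.GitVersion) // e.g. v1.26.2
}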
	I0307 10:28:31.930364    7018 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 10:28:32.112716    7018 request.go:622] Waited for 182.311615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:32.112771    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:32.112780    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:32.112834    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:32.112848    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:32.116811    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:32.116841    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:32.116877    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:32.116904    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:32 GMT
	I0307 10:28:32.116916    7018 round_trippers.go:580]     Audit-Id: c5d1857d-a22f-42d9-aec9-08ad8e7331bd
	I0307 10:28:32.116950    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:32.116966    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:32.116973    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:32.118187    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82836 chars]
	I0307 10:28:32.119945    7018 system_pods.go:59] 12 kube-system pods found
	I0307 10:28:32.119954    7018 system_pods.go:61] "coredns-787d4945fb-x8m8v" [c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6] Running
	I0307 10:28:32.119958    7018 system_pods.go:61] "etcd-multinode-260000" [aa53b0f1-968e-450d-90b2-ad26a79cea99] Running
	I0307 10:28:32.119963    7018 system_pods.go:61] "kindnet-gfgwn" [64dc8044-f77e-41b4-bb19-1a254bf29e05] Running
	I0307 10:28:32.119967    7018 system_pods.go:61] "kindnet-j5gj9" [f17b9702-c5c0-4b31-a136-e0370bc62d79] Running
	I0307 10:28:32.119970    7018 system_pods.go:61] "kindnet-z6kqp" [4884d21b-1be9-4b53-8f70-dd4fe0efa264] Running
	I0307 10:28:32.119975    7018 system_pods.go:61] "kube-apiserver-multinode-260000" [64ba25bc-eee2-433a-b0ef-a13769f04555] Running
	I0307 10:28:32.119993    7018 system_pods.go:61] "kube-controller-manager-multinode-260000" [8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c] Running
	I0307 10:28:32.120000    7018 system_pods.go:61] "kube-proxy-8qwhq" [3e455149-bbe2-4173-a413-f4962626b233] Running
	I0307 10:28:32.120004    7018 system_pods.go:61] "kube-proxy-pxshj" [3ee33e87-083d-4833-a6d4-8b459ec6ea70] Running
	I0307 10:28:32.120008    7018 system_pods.go:61] "kube-proxy-q8cm8" [b9f69548-a872-4d80-aa73-ffba99b33229] Running
	I0307 10:28:32.120011    7018 system_pods.go:61] "kube-scheduler-multinode-260000" [0739e1eb-4026-47ee-b2fe-6a9901c77317] Running
	I0307 10:28:32.120016    7018 system_pods.go:61] "storage-provisioner" [0b88c317-8e90-4927-b4f8-cae5597b5dc8] Running
	I0307 10:28:32.120019    7018 system_pods.go:74] duration metric: took 189.651129ms to wait for pod list to return data ...
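The sweep above is a single LIST of kube-system followed by a per-pod phase check, producing the "12 kube-system pods found" summary and one `"<name>" [<uid>] Running` line per pod. A compact sketch of that shape with client-go (standalone, assuming a local kubeconfig; not minikube's system_pods.go verbatim):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// One line per pod, in the same name/uid/phase shape as the log.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("pod %s is not Running yet\n", p.Name)
		}
	}
}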
	I0307 10:28:32.120025    7018 default_sa.go:34] waiting for default service account to be created ...
	I0307 10:28:32.313205    7018 request.go:622] Waited for 193.131438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/default/serviceaccounts
	I0307 10:28:32.313251    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/default/serviceaccounts
	I0307 10:28:32.313259    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:32.313271    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:32.313281    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:32.315756    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:32.315778    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:32.315809    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:32.315822    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:32.315830    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:32.315837    7018 round_trippers.go:580]     Content-Length: 262
	I0307 10:28:32.315843    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:32 GMT
	I0307 10:28:32.315850    7018 round_trippers.go:580]     Audit-Id: ac7a8c42-5ffa-402f-970f-d1d5a6d3058d
	I0307 10:28:32.315857    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:32.315874    7018 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6e32b5cd-63bd-46a7-9ed5-ea842da6729c","resourceVersion":"325","creationTimestamp":"2023-03-07T18:18:42Z"}}]}
	I0307 10:28:32.316001    7018 default_sa.go:45] found service account: "default"
	I0307 10:28:32.316010    7018 default_sa.go:55] duration metric: took 195.9795ms for default service account to be created ...
	I0307 10:28:32.316018    7018 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 10:28:32.513632    7018 request.go:622] Waited for 197.482521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:32.513683    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:32.513691    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:32.513704    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:32.513718    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:32.517123    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:32.517133    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:32.517139    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:32.517144    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:32.517148    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:32 GMT
	I0307 10:28:32.517154    7018 round_trippers.go:580]     Audit-Id: c5f53d8f-ee73-49a6-be78-6ca8c2200a8e
	I0307 10:28:32.517161    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:32.517168    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:32.517894    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82836 chars]
	I0307 10:28:32.519632    7018 system_pods.go:86] 12 kube-system pods found
	I0307 10:28:32.519641    7018 system_pods.go:89] "coredns-787d4945fb-x8m8v" [c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6] Running
	I0307 10:28:32.519650    7018 system_pods.go:89] "etcd-multinode-260000" [aa53b0f1-968e-450d-90b2-ad26a79cea99] Running
	I0307 10:28:32.519654    7018 system_pods.go:89] "kindnet-gfgwn" [64dc8044-f77e-41b4-bb19-1a254bf29e05] Running
	I0307 10:28:32.519659    7018 system_pods.go:89] "kindnet-j5gj9" [f17b9702-c5c0-4b31-a136-e0370bc62d79] Running
	I0307 10:28:32.519664    7018 system_pods.go:89] "kindnet-z6kqp" [4884d21b-1be9-4b53-8f70-dd4fe0efa264] Running
	I0307 10:28:32.519668    7018 system_pods.go:89] "kube-apiserver-multinode-260000" [64ba25bc-eee2-433a-b0ef-a13769f04555] Running
	I0307 10:28:32.519671    7018 system_pods.go:89] "kube-controller-manager-multinode-260000" [8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c] Running
	I0307 10:28:32.519675    7018 system_pods.go:89] "kube-proxy-8qwhq" [3e455149-bbe2-4173-a413-f4962626b233] Running
	I0307 10:28:32.519679    7018 system_pods.go:89] "kube-proxy-pxshj" [3ee33e87-083d-4833-a6d4-8b459ec6ea70] Running
	I0307 10:28:32.519683    7018 system_pods.go:89] "kube-proxy-q8cm8" [b9f69548-a872-4d80-aa73-ffba99b33229] Running
	I0307 10:28:32.519686    7018 system_pods.go:89] "kube-scheduler-multinode-260000" [0739e1eb-4026-47ee-b2fe-6a9901c77317] Running
	I0307 10:28:32.519690    7018 system_pods.go:89] "storage-provisioner" [0b88c317-8e90-4927-b4f8-cae5597b5dc8] Running
	I0307 10:28:32.519694    7018 system_pods.go:126] duration metric: took 203.671188ms to wait for k8s-apps to be running ...
	I0307 10:28:32.519699    7018 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 10:28:32.519751    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:28:32.528776    7018 system_svc.go:56] duration metric: took 9.073723ms WaitForService to wait for kubelet.
	I0307 10:28:32.528791    7018 kubeadm.go:578] duration metric: took 14.942639871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0307 10:28:32.528801    7018 node_conditions.go:102] verifying NodePressure condition ...
	I0307 10:28:32.714684    7018 request.go:622] Waited for 185.826429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes
	I0307 10:28:32.725835    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
	I0307 10:28:32.725851    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:32.725863    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:32.725878    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:32.728446    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:32.728460    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:32.728468    7018 round_trippers.go:580]     Audit-Id: baedd684-4a38-47c3-8b1a-5bac961a5fbc
	I0307 10:28:32.728477    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:32.728490    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:32.728500    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:32.728507    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:32.728514    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:32 GMT
	I0307 10:28:32.728762    7018 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16210 chars]
	I0307 10:28:32.729257    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:28:32.729266    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:28:32.729274    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:28:32.729278    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:28:32.729282    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:28:32.729286    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:28:32.729289    7018 node_conditions.go:105] duration metric: took 200.482518ms to run NodePressure ...
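Editor's note: the block above is minikube's readiness gate for the restarted control-plane node: it polls the apiserver for the system pods, the default service account, per-node conditions (ephemeral storage and CPU capacity are logged for each of the three nodes), and the kubelet service. A rough by-hand equivalent of the same checks, assuming a kubeconfig pointing at this cluster (illustrative only, not minikube's actual code path):
	kubectl get --raw='/readyz'                          # apiserver:true
	kubectl -n kube-system get pods                      # system_pods / apps_running
	kubectl -n default get serviceaccount default        # default_sa:true
	kubectl get nodes -o wide                            # node_ready / NodePressure inputs
	sudo systemctl is-active --quiet kubelet && echo ok  # kubelet, run on the node itself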
	I0307 10:28:32.729297    7018 start.go:228] waiting for startup goroutines ...
	I0307 10:28:32.729302    7018 start.go:233] waiting for cluster config update ...
	I0307 10:28:32.729308    7018 start.go:242] writing updated cluster config ...
	I0307 10:28:32.729786    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:28:32.729851    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:28:32.751369    7018 out.go:177] * Starting worker node multinode-260000-m02 in cluster multinode-260000
	I0307 10:28:32.794328    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:28:32.794413    7018 cache.go:57] Caching tarball of preloaded images
	I0307 10:28:32.794583    7018 preload.go:174] Found /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 10:28:32.794601    7018 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0307 10:28:32.794723    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:28:32.795675    7018 cache.go:193] Successfully downloaded all kic artifacts
	I0307 10:28:32.795702    7018 start.go:364] acquiring machines lock for multinode-260000-m02: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:28:32.795787    7018 start.go:368] acquired machines lock for "multinode-260000-m02" in 65.198µs
	I0307 10:28:32.795817    7018 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:28:32.795824    7018 fix.go:55] fixHost starting: m02
	I0307 10:28:32.796234    7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:28:32.796271    7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:28:32.804078    7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51665
	I0307 10:28:32.804430    7018 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:28:32.804833    7018 main.go:141] libmachine: Using API Version  1
	I0307 10:28:32.804855    7018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:28:32.805065    7018 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:28:32.805179    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:32.805269    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetState
	I0307 10:28:32.805361    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:28:32.805423    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid from json: 6295
	I0307 10:28:32.806220    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid 6295 missing from process table
	I0307 10:28:32.806256    7018 fix.go:103] recreateIfNeeded on multinode-260000-m02: state=Stopped err=<nil>
	I0307 10:28:32.806268    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	W0307 10:28:32.806350    7018 fix.go:129] unexpected machine state, will restart: <nil>
	I0307 10:28:32.827377    7018 out.go:177] * Restarting existing hyperkit VM for "multinode-260000-m02" ...
	I0307 10:28:32.869734    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .Start
	I0307 10:28:32.869997    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:28:32.870091    7018 main.go:141] libmachine: (multinode-260000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid
	I0307 10:28:32.871656    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid 6295 missing from process table
	I0307 10:28:32.871680    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | pid 6295 is in state "Stopped"
	I0307 10:28:32.871712    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid...
	I0307 10:28:32.871965    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Using UUID 835471be-bd14-11ed-9c3c-149d997fca88
	I0307 10:28:32.899206    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Generated MAC ba:65:3c:6f:8d:dc
	I0307 10:28:32.899232    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000
	I0307 10:28:32.899404    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"835471be-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000395b00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:28:32.899444    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"835471be-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000395b00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:28:32.899480    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "835471be-bd14-11ed-9c3c-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/multinode-260000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"}
	I0307 10:28:32.899519    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 835471be-bd14-11ed-9c3c-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/multinode-260000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"
	I0307 10:28:32.899533    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0307 10:28:32.900716    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Pid is 7098
	I0307 10:28:32.901058    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Attempt 0
	I0307 10:28:32.901070    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:28:32.901159    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid from json: 7098
	I0307 10:28:32.902759    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Searching for ba:65:3c:6f:8d:dc in /var/db/dhcpd_leases ...
	I0307 10:28:32.902821    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0307 10:28:32.902837    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d38e}
	I0307 10:28:32.902848    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x64078204}
	I0307 10:28:32.902856    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ca:14:a2:6d:d0:c ID:1,ca:14:a2:6d:d0:c Lease:0x6407819f}
	I0307 10:28:32.902881    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d194}
	I0307 10:28:32.902892    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Found match: ba:65:3c:6f:8d:dc
	I0307 10:28:32.902900    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | IP: 192.168.64.13
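Editor's note: the hyperkit driver has no guest agent for IP discovery; it generates a MAC address for the VM's virtio-net device and then scans the macOS DHCP lease database until an entry with that MAC shows up, as logged above. The equivalent manual lookup on the host would be something like the following (the exact on-disk lease format is an assumption inferred from the parsed entries in the log):
	# Find the lease whose hardware address matches the generated MAC
	grep -i 'ba:65:3c:6f:8d:dc' /var/db/dhcpd_leases
	# the surrounding entry carries the assigned address, here 192.168.64.13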
	I0307 10:28:32.902925    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetConfigRaw
	I0307 10:28:32.903499    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
	I0307 10:28:32.903686    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:28:32.904005    7018 machine.go:88] provisioning docker machine ...
	I0307 10:28:32.904016    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:32.904127    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetMachineName
	I0307 10:28:32.904238    7018 buildroot.go:166] provisioning hostname "multinode-260000-m02"
	I0307 10:28:32.904248    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetMachineName
	I0307 10:28:32.904335    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:32.904423    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:32.904506    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:32.904579    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:32.904654    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:32.904766    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:28:32.905083    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.13 22 <nil> <nil>}
	I0307 10:28:32.905099    7018 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-260000-m02 && echo "multinode-260000-m02" | sudo tee /etc/hostname
	I0307 10:28:32.907073    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0307 10:28:32.914845    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0307 10:28:32.915562    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:28:32.915575    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:28:32.915583    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:28:32.915590    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:28:33.270333    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0307 10:28:33.270350    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0307 10:28:33.374324    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:28:33.374345    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:28:33.374362    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:28:33.374375    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:28:33.375209    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0307 10:28:33.375231    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0307 10:28:37.885819    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0307 10:28:37.885892    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0307 10:28:37.885906    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0307 10:28:43.994445    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-260000-m02
	
	I0307 10:28:43.994460    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:43.994617    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:43.994725    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:43.994819    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:43.994903    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:43.995031    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:28:43.995375    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.13 22 <nil> <nil>}
	I0307 10:28:43.995387    7018 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-260000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-260000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-260000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 10:28:44.074363    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
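Editor's note: the SSH snippet above makes the freshly set hostname resolvable inside the guest without DNS: if no /etc/hosts line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry in place or appends a new one. A quick verification on the guest (illustrative):
	grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 multinode-260000-m02
	hostname                       # expect: multinode-260000-m02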
	I0307 10:28:44.074384    7018 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
	I0307 10:28:44.074392    7018 buildroot.go:174] setting up certificates
	I0307 10:28:44.074399    7018 provision.go:83] configureAuth start
	I0307 10:28:44.074407    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetMachineName
	I0307 10:28:44.074531    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
	I0307 10:28:44.074611    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:44.074689    7018 provision.go:138] copyHostCerts
	I0307 10:28:44.074731    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:28:44.074787    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
	I0307 10:28:44.074794    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:28:44.074898    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
	I0307 10:28:44.075070    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:28:44.075104    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
	I0307 10:28:44.075109    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:28:44.075176    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
	I0307 10:28:44.075308    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:28:44.075341    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
	I0307 10:28:44.075345    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:28:44.075412    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
	I0307 10:28:44.075534    7018 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.multinode-260000-m02 san=[192.168.64.13 192.168.64.13 localhost 127.0.0.1 minikube multinode-260000-m02]
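Editor's note: configureAuth refreshes the host-side copies of the CA material and then generates a Docker server certificate whose SANs cover the node IP, localhost/127.0.0.1, and both machine names, as the san=[...] list above shows. The SANs on the generated cert can be inspected with openssl (path taken from the log line above):
	openssl x509 -noout -text \
	  -in /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'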
	I0307 10:28:44.229773    7018 provision.go:172] copyRemoteCerts
	I0307 10:28:44.229826    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 10:28:44.229842    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:44.229985    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:44.230082    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.230172    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:44.230271    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
	I0307 10:28:44.272044    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0307 10:28:44.272115    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 10:28:44.288148    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0307 10:28:44.288225    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0307 10:28:44.303969    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0307 10:28:44.304037    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 10:28:44.319850    7018 provision.go:86] duration metric: configureAuth took 245.441923ms
	I0307 10:28:44.319862    7018 buildroot.go:189] setting minikube options for container-runtime
	I0307 10:28:44.320030    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:28:44.320045    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:44.320174    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:44.320276    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:44.320360    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.320463    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.320545    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:44.320659    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:28:44.320957    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.13 22 <nil> <nil>}
	I0307 10:28:44.320966    7018 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 10:28:44.395776    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 10:28:44.395788    7018 buildroot.go:70] root file system type: tmpfs
	I0307 10:28:44.395864    7018 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 10:28:44.395879    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:44.396009    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:44.396095    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.396175    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.396263    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:44.396386    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:28:44.396702    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.13 22 <nil> <nil>}
	I0307 10:28:44.396747    7018 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.64.12"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 10:28:44.478924    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.64.12
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 10:28:44.478942    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:44.479070    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:44.479153    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.479233    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.479316    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:44.479441    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:28:44.479748    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.13 22 <nil> <nil>}
	I0307 10:28:44.479760    7018 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 10:28:45.040521    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 10:28:45.040534    7018 machine.go:91] provisioned docker machine in 12.136465556s
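Editor's note on the unit file written above: the paired ExecStart= lines are the standard systemd idiom for replacing, rather than appending to, the base unit's start command, exactly as the inline comments in the unit explain. The `diff ... || { mv ...; }` one-liner then makes the install idempotent: docker is only re-enabled and restarted when the generated unit differs from, or (as in this run) does not yet exist at, /lib/systemd/system/docker.service. The same replace-don't-append pattern in a minimal local override (hypothetical flags, for illustration only):
	sudo mkdir -p /etc/systemd/system/docker.service.d
	printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd --debug\n' |
	  sudo tee /etc/systemd/system/docker.service.d/override.conf
	sudo systemctl daemon-reload && sudo systemctl restart docker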
	I0307 10:28:45.040540    7018 start.go:300] post-start starting for "multinode-260000-m02" (driver="hyperkit")
	I0307 10:28:45.040546    7018 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 10:28:45.040555    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:45.040748    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 10:28:45.040760    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:45.040882    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:45.040972    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:45.041059    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:45.041157    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
	I0307 10:28:45.087397    7018 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 10:28:45.091149    7018 command_runner.go:130] > NAME=Buildroot
	I0307 10:28:45.091158    7018 command_runner.go:130] > VERSION=2021.02.12-1-gab7f370-dirty
	I0307 10:28:45.091162    7018 command_runner.go:130] > ID=buildroot
	I0307 10:28:45.091166    7018 command_runner.go:130] > VERSION_ID=2021.02.12
	I0307 10:28:45.091170    7018 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0307 10:28:45.091259    7018 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 10:28:45.091268    7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
	I0307 10:28:45.091351    7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
	I0307 10:28:45.091498    7018 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
	I0307 10:28:45.091504    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /etc/ssl/certs/39032.pem
	I0307 10:28:45.091663    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 10:28:45.100582    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
	I0307 10:28:45.126802    7018 start.go:303] post-start completed in 86.252226ms
	I0307 10:28:45.126814    7018 fix.go:57] fixHost completed within 12.330934005s
	I0307 10:28:45.126826    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:45.126964    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:45.127056    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:45.127154    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:45.127232    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:45.127364    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:28:45.127672    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.13 22 <nil> <nil>}
	I0307 10:28:45.127680    7018 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 10:28:45.202858    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678213725.334485743
	
	I0307 10:28:45.202870    7018 fix.go:207] guest clock: 1678213725.334485743
	I0307 10:28:45.202880    7018 fix.go:220] Guest: 2023-03-07 10:28:45.334485743 -0800 PST Remote: 2023-03-07 10:28:45.126816 -0800 PST m=+87.461319305 (delta=207.669743ms)
	I0307 10:28:45.202890    7018 fix.go:191] guest clock delta is within tolerance: 207.669743ms
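Editor's note: the clock check above runs `date +%s.%N` in the guest and compares it with the host's wall clock when the command returns; the ~208ms delta is inside minikube's tolerance, so no time sync is forced. A by-hand version, using the SSH key path from the log (illustrative; note BSD date on the macOS host lacks %N, hence python for sub-second precision):
	guest=$(ssh -i /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa \
	        docker@192.168.64.13 date +%s.%N)
	host=$(python3 -c 'import time; print(f"{time.time():.9f}")')
	echo "$host - $guest" | bc   # skew in seconds; small values either way are fine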
	I0307 10:28:45.202894    7018 start.go:83] releasing machines lock for "multinode-260000-m02", held for 12.407039272s
	I0307 10:28:45.202911    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:45.203045    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
	I0307 10:28:45.229173    7018 out.go:177] * Found network options:
	I0307 10:28:45.249904    7018 out.go:177]   - NO_PROXY=192.168.64.12
	W0307 10:28:45.271748    7018 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 10:28:45.271793    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:45.272543    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:45.272757    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:45.272892    7018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 10:28:45.272940    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	W0307 10:28:45.273042    7018 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 10:28:45.273135    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:45.273147    7018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 10:28:45.273165    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:45.273342    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:45.273376    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:45.273607    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:45.273659    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:45.273827    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:45.273861    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
	I0307 10:28:45.274044    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
	I0307 10:28:45.313860    7018 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0307 10:28:45.314024    7018 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 10:28:45.314083    7018 ssh_runner.go:195] Run: which cri-dockerd
	I0307 10:28:45.353726    7018 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0307 10:28:45.354872    7018 command_runner.go:130] > /usr/bin/cri-dockerd
	I0307 10:28:45.355027    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 10:28:45.362451    7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0307 10:28:45.373398    7018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 10:28:45.384177    7018 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0307 10:28:45.384307    7018 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
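Editor's note: CRI runtimes typically load the lexically first config file in /etc/cni/net.d, so a bridge/podman config baked into the ISO could shadow the kindnet config this cluster uses. Rather than deleting the competing files, the find/mv above parks them under a .mk_disabled suffix, which keeps the step reversible:
	ls /etc/cni/net.d
	# 87-podman-bridge.conflist.mk_disabled   <- ignored by the runtime, easy to restore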
	I0307 10:28:45.384316    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:28:45.384403    7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:28:45.401772    7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
	I0307 10:28:45.401790    7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 10:28:45.401795    7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 10:28:45.401801    7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 10:28:45.401805    7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 10:28:45.401809    7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 10:28:45.401813    7018 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 10:28:45.401818    7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 10:28:45.401823    7018 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0307 10:28:45.401828    7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:28:45.401832    7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0307 10:28:45.402825    7018 docker.go:630] Got preloaded images: -- stdout --
	kindest/kindnetd:v20230227-15197099
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0307 10:28:45.402834    7018 docker.go:560] Images already preloaded, skipping extraction
	I0307 10:28:45.402840    7018 start.go:485] detecting cgroup driver to use...
	I0307 10:28:45.402914    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:28:45.415287    7018 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0307 10:28:45.415302    7018 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0307 10:28:45.415537    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 10:28:45.422829    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 10:28:45.429702    7018 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 10:28:45.429750    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 10:28:45.436708    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:28:45.443666    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 10:28:45.450827    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:28:45.457881    7018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 10:28:45.464910    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
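Editor's note: even though docker is the selected runtime, minikube normalizes /etc/containerd/config.toml as well, since containerd ships in the ISO. The sed edits above pin the sandbox (pause) image, disable restrict_oom_score_adj, select the runc v2 shim, and set SystemdCgroup = false so every runtime on the node agrees on the cgroupfs driver. The net effect can be confirmed with (illustrative):
	grep -E 'sandbox_image|SystemdCgroup|conf_dir|restrict_oom_score_adj' /etc/containerd/config.toml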
	I0307 10:28:45.471731    7018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 10:28:45.477787    7018 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0307 10:28:45.477987    7018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 10:28:45.484272    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:28:45.566893    7018 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 10:28:45.578247    7018 start.go:485] detecting cgroup driver to use...
	I0307 10:28:45.578332    7018 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 10:28:45.587719    7018 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0307 10:28:45.588048    7018 command_runner.go:130] > [Unit]
	I0307 10:28:45.588056    7018 command_runner.go:130] > Description=Docker Application Container Engine
	I0307 10:28:45.588070    7018 command_runner.go:130] > Documentation=https://docs.docker.com
	I0307 10:28:45.588078    7018 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0307 10:28:45.588085    7018 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0307 10:28:45.588091    7018 command_runner.go:130] > StartLimitBurst=3
	I0307 10:28:45.588111    7018 command_runner.go:130] > StartLimitIntervalSec=60
	I0307 10:28:45.588119    7018 command_runner.go:130] > [Service]
	I0307 10:28:45.588126    7018 command_runner.go:130] > Type=notify
	I0307 10:28:45.588130    7018 command_runner.go:130] > Restart=on-failure
	I0307 10:28:45.588134    7018 command_runner.go:130] > Environment=NO_PROXY=192.168.64.12
	I0307 10:28:45.588141    7018 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0307 10:28:45.588148    7018 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0307 10:28:45.588153    7018 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0307 10:28:45.588159    7018 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0307 10:28:45.588164    7018 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0307 10:28:45.588170    7018 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0307 10:28:45.588176    7018 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0307 10:28:45.588189    7018 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0307 10:28:45.588195    7018 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0307 10:28:45.588199    7018 command_runner.go:130] > ExecStart=
	I0307 10:28:45.588218    7018 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0307 10:28:45.588223    7018 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0307 10:28:45.588228    7018 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0307 10:28:45.588234    7018 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0307 10:28:45.588238    7018 command_runner.go:130] > LimitNOFILE=infinity
	I0307 10:28:45.588247    7018 command_runner.go:130] > LimitNPROC=infinity
	I0307 10:28:45.588253    7018 command_runner.go:130] > LimitCORE=infinity
	I0307 10:28:45.588259    7018 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0307 10:28:45.588263    7018 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0307 10:28:45.588267    7018 command_runner.go:130] > TasksMax=infinity
	I0307 10:28:45.588270    7018 command_runner.go:130] > TimeoutStartSec=0
	I0307 10:28:45.588276    7018 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0307 10:28:45.588279    7018 command_runner.go:130] > Delegate=yes
	I0307 10:28:45.588284    7018 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0307 10:28:45.588294    7018 command_runner.go:130] > KillMode=process
	I0307 10:28:45.588298    7018 command_runner.go:130] > [Install]
	I0307 10:28:45.588302    7018 command_runner.go:130] > WantedBy=multi-user.target
	I0307 10:28:45.588380    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:28:45.599940    7018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 10:28:45.612861    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:28:45.622327    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:28:45.630580    7018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 10:28:45.653722    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:28:45.662024    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:28:45.674917    7018 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 10:28:45.674931    7018 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 10:28:45.674988    7018 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 10:28:45.756263    7018 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 10:28:45.846497    7018 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 10:28:45.846514    7018 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
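Editor's note: the 144-byte daemon.json pushed here is not echoed in the log; based on the "cgroupfs" message above, it carries the cgroup-driver stanza, roughly as follows (assumed content, not the verbatim file):
	cat /etc/docker/daemon.json
	# {
	#   "exec-opts": ["native.cgroupdriver=cgroupfs"]
	# }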
	I0307 10:28:45.858511    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:28:45.944748    7018 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:28:47.255144    7018 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.310371403s)
	I0307 10:28:47.255214    7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 10:28:47.335677    7018 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 10:28:47.417454    7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 10:28:47.513228    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:28:47.598471    7018 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 10:28:47.611967    7018 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 10:28:47.612060    7018 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 10:28:47.616814    7018 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0307 10:28:47.616826    7018 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0307 10:28:47.616831    7018 command_runner.go:130] > Device: 16h/22d	Inode: 852         Links: 1
	I0307 10:28:47.616837    7018 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0307 10:28:47.616851    7018 command_runner.go:130] > Access: 2023-03-07 18:28:47.742167434 +0000
	I0307 10:28:47.616856    7018 command_runner.go:130] > Modify: 2023-03-07 18:28:47.742167434 +0000
	I0307 10:28:47.616860    7018 command_runner.go:130] > Change: 2023-03-07 18:28:47.744167434 +0000
	I0307 10:28:47.616865    7018 command_runner.go:130] >  Birth: -
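
Note: the stat output above is the success side of the "Will wait 60s for socket path" step: minikube polls until /var/run/cri-dockerd.sock exists and is actually a socket before trusting the CRI endpoint. A sketch of such a wait, assuming only the standard library (not minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the
    // timeout elapses, sketching the "Will wait 60s for socket path" step.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for socket %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
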
	I0307 10:28:47.617043    7018 start.go:553] Will wait 60s for crictl version
	I0307 10:28:47.617089    7018 ssh_runner.go:195] Run: which crictl
	I0307 10:28:47.619815    7018 command_runner.go:130] > /usr/bin/crictl
	I0307 10:28:47.619873    7018 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 10:28:47.691285    7018 command_runner.go:130] > Version:  0.1.0
	I0307 10:28:47.691297    7018 command_runner.go:130] > RuntimeName:  docker
	I0307 10:28:47.691301    7018 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0307 10:28:47.691305    7018 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0307 10:28:47.692228    7018 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0307 10:28:47.692301    7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:28:47.711035    7018 command_runner.go:130] > 20.10.23
	I0307 10:28:47.728475    7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:28:47.749259    7018 command_runner.go:130] > 20.10.23
	I0307 10:28:47.770120    7018 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
	I0307 10:28:47.813210    7018 out.go:177]   - env NO_PROXY=192.168.64.12
	I0307 10:28:47.835385    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
	I0307 10:28:47.835775    7018 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0307 10:28:47.840292    7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
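
Note: the one-liner above is how minikube keeps /etc/hosts idempotent: grep -v strips any stale host.minikube.internal line, the echo re-appends the current mapping, and the temp file is copied back in one sudo step. The same idea in Go, as a local sketch (the real command runs remotely under sudo, and the helper name here is illustrative):

    package main

    import (
        "os"
        "strings"
    )

    // updateHostsEntry removes any existing line ending in "<TAB>hostname"
    // and appends "ip<TAB>hostname", matching the grep -v / echo / cp
    // one-liner in the log above.
    func updateHostsEntry(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+hostname) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+hostname)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }
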
	I0307 10:28:47.848646    7018 certs.go:56] Setting up /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000 for IP: 192.168.64.13
	I0307 10:28:47.848666    7018 certs.go:186] acquiring lock for shared ca certs: {Name:mk21aa92235e3b083ba3cf4a52527e5734aca22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:28:47.848814    7018 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key
	I0307 10:28:47.848878    7018 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key
	I0307 10:28:47.848891    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 10:28:47.848915    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0307 10:28:47.848940    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 10:28:47.848960    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 10:28:47.849045    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem (1338 bytes)
	W0307 10:28:47.849088    7018 certs.go:397] ignoring /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903_empty.pem, impossibly tiny 0 bytes
	I0307 10:28:47.849100    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 10:28:47.849141    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem (1082 bytes)
	I0307 10:28:47.849185    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem (1123 bytes)
	I0307 10:28:47.849224    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem (1675 bytes)
	I0307 10:28:47.849299    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem (1708 bytes)
	I0307 10:28:47.849342    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:28:47.849367    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem -> /usr/share/ca-certificates/3903.pem
	I0307 10:28:47.849386    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /usr/share/ca-certificates/39032.pem
	I0307 10:28:47.849662    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 10:28:47.865455    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 10:28:47.881052    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 10:28:47.896926    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 10:28:47.912741    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 10:28:47.928528    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem --> /usr/share/ca-certificates/3903.pem (1338 bytes)
	I0307 10:28:47.945013    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /usr/share/ca-certificates/39032.pem (1708 bytes)
	I0307 10:28:47.960635    7018 ssh_runner.go:195] Run: openssl version
	I0307 10:28:47.964021    7018 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0307 10:28:47.964272    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 10:28:47.971316    7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:28:47.974134    7018 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:28:47.974290    7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar  7 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:28:47.974333    7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:28:47.977654    7018 command_runner.go:130] > b5213941
	I0307 10:28:47.977920    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 10:28:47.984887    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3903.pem && ln -fs /usr/share/ca-certificates/3903.pem /etc/ssl/certs/3903.pem"
	I0307 10:28:47.992249    7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3903.pem
	I0307 10:28:47.995266    7018 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 18:06 /usr/share/ca-certificates/3903.pem
	I0307 10:28:47.995458    7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar  7 18:06 /usr/share/ca-certificates/3903.pem
	I0307 10:28:47.995499    7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3903.pem
	I0307 10:28:47.998865    7018 command_runner.go:130] > 51391683
	I0307 10:28:47.999120    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3903.pem /etc/ssl/certs/51391683.0"
	I0307 10:28:48.006141    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/39032.pem && ln -fs /usr/share/ca-certificates/39032.pem /etc/ssl/certs/39032.pem"
	I0307 10:28:48.013240    7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/39032.pem
	I0307 10:28:48.016074    7018 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 18:06 /usr/share/ca-certificates/39032.pem
	I0307 10:28:48.016260    7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar  7 18:06 /usr/share/ca-certificates/39032.pem
	I0307 10:28:48.016294    7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/39032.pem
	I0307 10:28:48.019631    7018 command_runner.go:130] > 3ec20f2e
	I0307 10:28:48.019880    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/39032.pem /etc/ssl/certs/3ec20f2e.0"
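
Note: the three openssl/ln blocks above install each CA under /usr/share/ca-certificates and then symlink it into /etc/ssl/certs as <subject-hash>.0, the lookup convention OpenSSL's c_rehash uses. Computing the subject hash by hand is fiddly, so a sketch that, like the log, delegates to the openssl binary (needs write access to /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkByHash reproduces the pattern above: compute the OpenSSL subject
    // hash of certPath and symlink /etc/ssl/certs/<hash>.0 to it.
    func linkByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(certPath, link)
    }
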
	I0307 10:28:48.026902    7018 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 10:28:48.048324    7018 command_runner.go:130] > cgroupfs
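
Note: `docker info --format {{.CgroupDriver}}` is queried so the kubelet can be configured with the same cgroup driver (see cgroupDriver: cgroupfs in the KubeletConfiguration below); a mismatch between Docker and kubelet here is a classic kubelet start failure. The probe itself, as a standalone sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // dockerCgroupDriver asks the Docker daemon which cgroup driver it
    // uses, exactly as the `docker info` call in the log above.
    func dockerCgroupDriver() (string, error) {
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        d, err := dockerCgroupDriver()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        fmt.Println("cgroup driver:", d) // "cgroupfs" or "systemd"
    }
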
	I0307 10:28:48.048980    7018 cni.go:84] Creating CNI manager for ""
	I0307 10:28:48.048990    7018 cni.go:136] 3 nodes found, recommending kindnet
	I0307 10:28:48.048997    7018 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0307 10:28:48.049008    7018 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.13 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-260000 NodeName:multinode-260000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0307 10:28:48.049099    7018 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-260000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 10:28:48.049134    7018 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-260000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
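
Note: the kubeadm config rendered above is one YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), and the kubelet drop-in that follows clears ExecStart= before redefining it, the standard systemd idiom for overriding a unit's command. A sketch for inspecting such a multi-document stream, assuming gopkg.in/yaml.v3 and a local copy of the config saved as kubeadm.yaml:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Walks a multi-document YAML stream like the kubeadm config above
    // and prints each document's kind and apiVersion.
    func main() {
        f, err := os.Open("kubeadm.yaml") // assumed local copy of the config
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            err := dec.Decode(&doc)
            if err == io.EOF {
                break
            }
            if err != nil {
                panic(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }
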
	I0307 10:28:48.049192    7018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0307 10:28:48.055441    7018 command_runner.go:130] > kubeadm
	I0307 10:28:48.055448    7018 command_runner.go:130] > kubectl
	I0307 10:28:48.055454    7018 command_runner.go:130] > kubelet
	I0307 10:28:48.055533    7018 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 10:28:48.055575    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0307 10:28:48.061804    7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (453 bytes)
	I0307 10:28:48.072809    7018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 10:28:48.083885    7018 ssh_runner.go:195] Run: grep 192.168.64.12	control-plane.minikube.internal$ /etc/hosts
	I0307 10:28:48.086255    7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 10:28:48.093971    7018 host.go:66] Checking if "multinode-260000" exists ...
	I0307 10:28:48.094151    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:28:48.094253    7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:28:48.094274    7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:28:48.101209    7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51684
	I0307 10:28:48.101550    7018 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:28:48.101900    7018 main.go:141] libmachine: Using API Version  1
	I0307 10:28:48.101916    7018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:28:48.102150    7018 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:28:48.102258    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:28:48.102341    7018 start.go:301] JoinCluster: &{Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 10:28:48.102433    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0307 10:28:48.102443    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:28:48.102521    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:28:48.102622    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:28:48.102707    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:28:48.102782    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
	I0307 10:28:48.189788    7018 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zh6icb.v6kqx4onyxvfd8hz --discovery-token-ca-cert-hash sha256:d33f97e9e16d7e3e3153d34b9abf6cc9c10aed60f07ce313a956e9c1066684af 
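
Note: the join command minted above pins the cluster CA via --discovery-token-ca-cert-hash, which is the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo. It can be recomputed from ca.crt with just the standard library; a sketch (cert path taken from the log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // Prints the kubeadm discovery-token-ca-cert-hash for a CA cert:
    // sha256 over the DER-encoded SubjectPublicKeyInfo.
    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
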
	I0307 10:28:48.189814    7018 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0307 10:28:48.189833    7018 host.go:66] Checking if "multinode-260000" exists ...
	I0307 10:28:48.190161    7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:28:48.190186    7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:28:48.196916    7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51687
	I0307 10:28:48.197249    7018 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:28:48.197612    7018 main.go:141] libmachine: Using API Version  1
	I0307 10:28:48.197624    7018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:28:48.197818    7018 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:28:48.197901    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:28:48.198033    7018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl drain multinode-260000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0307 10:28:48.198050    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:28:48.198133    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:28:48.198209    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:28:48.198294    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:28:48.198376    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
	I0307 10:28:48.295688    7018 command_runner.go:130] > node/multinode-260000-m02 cordoned
	I0307 10:28:51.318733    7018 command_runner.go:130] > pod "busybox-6b86dd6d48-dmrds" has DeletionTimestamp older than 1 seconds, skipping
	I0307 10:28:51.318748    7018 command_runner.go:130] > node/multinode-260000-m02 drained
	I0307 10:28:51.319712    7018 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0307 10:28:51.319724    7018 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-z6kqp, kube-system/kube-proxy-pxshj
	I0307 10:28:51.319743    7018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl drain multinode-260000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.121678108s)
	I0307 10:28:51.319753    7018 node.go:109] successfully drained node "m02"
	I0307 10:28:51.320044    7018 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:28:51.320243    7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:28:51.320537    7018 request.go:1171] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0307 10:28:51.320569    7018 round_trippers.go:463] DELETE https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:51.320574    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:51.320580    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:51.320586    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:51.320592    7018 round_trippers.go:473]     Content-Type: application/json
	I0307 10:28:51.323598    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:51.323609    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:51.323615    7018 round_trippers.go:580]     Audit-Id: d4c330be-b2e7-4781-aecc-cf162ed512f1
	I0307 10:28:51.323620    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:51.323625    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:51.323630    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:51.323636    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:51.323643    7018 round_trippers.go:580]     Content-Length: 171
	I0307 10:28:51.323649    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:51 GMT
	I0307 10:28:51.323663    7018 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-260000-m02","kind":"nodes","uid":"75f8e0c4-47f4-43dc-ac5e-5f77d8d4ab3b"}}
	I0307 10:28:51.323690    7018 node.go:125] successfully deleted node "m02"
	I0307 10:28:51.323697    7018 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
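
Note: after the drain, the stale Node object is removed with a raw DELETE against /api/v1/nodes/multinode-260000-m02 (the Status: Success response above). With client-go the same call is one typed method; a sketch assuming a kubeconfig at the default location:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // Deletes a node object: the typed equivalent of the raw
    // DELETE /api/v1/nodes/multinode-260000-m02 in the log above.
    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if err := clientset.CoreV1().Nodes().Delete(context.TODO(), "multinode-260000-m02", metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
    }
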
	I0307 10:28:51.323715    7018 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0307 10:28:51.323731    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zh6icb.v6kqx4onyxvfd8hz --discovery-token-ca-cert-hash sha256:d33f97e9e16d7e3e3153d34b9abf6cc9c10aed60f07ce313a956e9c1066684af --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-260000-m02"
	I0307 10:28:51.374604    7018 command_runner.go:130] ! W0307 18:28:51.510767    1198 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 10:28:51.505076    7018 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 10:28:53.147207    7018 command_runner.go:130] > [preflight] Running pre-flight checks
	I0307 10:28:53.147229    7018 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0307 10:28:53.147240    7018 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0307 10:28:53.147249    7018 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 10:28:53.147258    7018 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 10:28:53.147266    7018 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0307 10:28:53.147275    7018 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0307 10:28:53.147285    7018 command_runner.go:130] > This node has joined the cluster:
	I0307 10:28:53.147294    7018 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0307 10:28:53.147304    7018 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0307 10:28:53.147313    7018 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0307 10:28:53.147327    7018 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zh6icb.v6kqx4onyxvfd8hz --discovery-token-ca-cert-hash sha256:d33f97e9e16d7e3e3153d34b9abf6cc9c10aed60f07ce313a956e9c1066684af --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-260000-m02": (1.823577721s)
	I0307 10:28:53.147343    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0307 10:28:53.256139    7018 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0307 10:28:53.347575    7018 start.go:303] JoinCluster complete in 5.245201975s
	I0307 10:28:53.347588    7018 cni.go:84] Creating CNI manager for ""
	I0307 10:28:53.347594    7018 cni.go:136] 3 nodes found, recommending kindnet
	I0307 10:28:53.347676    7018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 10:28:53.350863    7018 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0307 10:28:53.350874    7018 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0307 10:28:53.350882    7018 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0307 10:28:53.350888    7018 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0307 10:28:53.350895    7018 command_runner.go:130] > Access: 2023-03-07 18:27:25.800133630 +0000
	I0307 10:28:53.350899    7018 command_runner.go:130] > Modify: 2023-02-24 23:58:49.000000000 +0000
	I0307 10:28:53.350904    7018 command_runner.go:130] > Change: 2023-03-07 18:27:24.520133706 +0000
	I0307 10:28:53.350907    7018 command_runner.go:130] >  Birth: -
	I0307 10:28:53.350976    7018 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
	I0307 10:28:53.350986    7018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0307 10:28:53.365774    7018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 10:28:53.573328    7018 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0307 10:28:53.576007    7018 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0307 10:28:53.577626    7018 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0307 10:28:53.586569    7018 command_runner.go:130] > daemonset.apps/kindnet configured
	I0307 10:28:53.588317    7018 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:28:53.588503    7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:28:53.588731    7018 round_trippers.go:463] GET https://192.168.64.12:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0307 10:28:53.588737    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:53.588744    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:53.588750    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:53.590037    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:53.590045    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:53.590053    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:53.590058    7018 round_trippers.go:580]     Content-Length: 292
	I0307 10:28:53.590065    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:53 GMT
	I0307 10:28:53.590074    7018 round_trippers.go:580]     Audit-Id: 09b51ea0-529c-4d47-a052-cef6398d810c
	I0307 10:28:53.590096    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:53.590105    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:53.590110    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:53.590121    7018 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9058bb7-5525-4245-a92a-3b0f0144c5d4","resourceVersion":"1155","creationTimestamp":"2023-03-07T18:18:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0307 10:28:53.590164    7018 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-260000" context rescaled to 1 replicas
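
Note: the GET .../deployments/coredns/scale above reads the autoscaling/v1 Scale subresource, which is how minikube decides whether coredns needs rescaling to one replica. The typed equivalent via client-go, as a sketch (default kubeconfig assumed):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // Reads the coredns deployment's Scale subresource and lowers it to
    // one replica if needed, mirroring the rescale step in the log above.
    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if scale.Spec.Replicas != 1 {
            scale.Spec.Replicas = 1
            if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
        }
        fmt.Println("coredns replicas:", scale.Spec.Replicas)
    }
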
	I0307 10:28:53.590178    7018 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0307 10:28:53.633568    7018 out.go:177] * Verifying Kubernetes components...
	I0307 10:28:53.691468    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:28:53.703497    7018 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:28:53.703698    7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:28:53.703918    7018 node_ready.go:35] waiting up to 6m0s for node "multinode-260000-m02" to be "Ready" ...
	I0307 10:28:53.703963    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:53.703968    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:53.703974    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:53.703981    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:53.705420    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:53.705433    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:53.705439    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:53.705445    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:53 GMT
	I0307 10:28:53.705455    7018 round_trippers.go:580]     Audit-Id: e2d373c1-190f-45e0-b9cf-3d8d054fb1e3
	I0307 10:28:53.705460    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:53.705465    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:53.705470    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:53.705557    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
	I0307 10:28:54.205959    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:54.205976    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:54.205988    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:54.205995    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:54.208023    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:54.208036    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:54.208042    7018 round_trippers.go:580]     Audit-Id: 162bfd38-128d-4c94-8620-4dd73b77dd1a
	I0307 10:28:54.208050    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:54.208055    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:54.208065    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:54.208073    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:54.208080    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:54 GMT
	I0307 10:28:54.208268    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
	I0307 10:28:54.706066    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:54.706077    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:54.706084    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:54.706089    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:54.708076    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:54.708088    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:54.708095    7018 round_trippers.go:580]     Audit-Id: dd80323e-e17e-4577-b133-2911fcce9fc1
	I0307 10:28:54.708100    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:54.708105    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:54.708110    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:54.708115    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:54.708120    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:54 GMT
	I0307 10:28:54.708207    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
	I0307 10:28:55.206158    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:55.206172    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:55.206179    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:55.206184    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:55.207805    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:55.207815    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:55.207820    7018 round_trippers.go:580]     Audit-Id: 9200c148-32d8-4985-98ec-72d4b636ae7e
	I0307 10:28:55.207825    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:55.207831    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:55.207835    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:55.207840    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:55.207845    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:55 GMT
	I0307 10:28:55.207923    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
	I0307 10:28:55.706104    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:55.706119    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:55.706125    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:55.706131    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:55.707769    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:55.707783    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:55.707791    7018 round_trippers.go:580]     Audit-Id: 0773193b-a44b-4173-a89e-1b4397280289
	I0307 10:28:55.707797    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:55.707803    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:55.707808    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:55.707813    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:55.707818    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:55 GMT
	I0307 10:28:55.707892    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
	I0307 10:28:55.708076    7018 node_ready.go:58] node "multinode-260000-m02" has status "Ready":"False"
	I0307 10:28:56.205958    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:56.205974    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:56.205981    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:56.205986    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:56.207374    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:56.207390    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:56.207399    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:56.207406    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:56.207412    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:56 GMT
	I0307 10:28:56.207418    7018 round_trippers.go:580]     Audit-Id: 0b890c7d-2626-4ab5-8e75-3a16b9eecf54
	I0307 10:28:56.207427    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:56.207433    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:56.207515    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
	I0307 10:28:56.705900    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:56.705916    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:56.705923    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:56.705928    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:56.707741    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:56.707756    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:56.707766    7018 round_trippers.go:580]     Audit-Id: 0e59b396-e7bf-4b72-b74c-a01f645f9864
	I0307 10:28:56.707778    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:56.707804    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:56.707821    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:56.707834    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:56.707842    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:56 GMT
	I0307 10:28:56.707912    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1221","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4772 chars]
	I0307 10:28:57.206205    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:57.206216    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:57.206228    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:57.206234    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:57.207878    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:57.207889    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:57.207894    7018 round_trippers.go:580]     Audit-Id: a4dcdc28-4a89-41fc-a490-5614c72a2f7c
	I0307 10:28:57.207900    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:57.207905    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:57.207913    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:57.207918    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:57.207923    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:57 GMT
	I0307 10:28:57.208010    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1221","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4772 chars]
	I0307 10:28:57.706332    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:57.727379    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:57.727424    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:57.727437    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:57.731183    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:57.731198    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:57.731206    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:57 GMT
	I0307 10:28:57.731221    7018 round_trippers.go:580]     Audit-Id: f535ff1c-e3e0-4a4e-acf9-6dabcd316387
	I0307 10:28:57.731231    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:57.731241    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:57.731249    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:57.731255    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:57.731338    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1221","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4772 chars]
	I0307 10:28:57.731568    7018 node_ready.go:58] node "multinode-260000-m02" has status "Ready":"False"
	I0307 10:28:58.206943    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:58.206954    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.206960    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.206966    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.208597    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.208612    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.208617    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.208623    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.208628    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.208633    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.208638    7018 round_trippers.go:580]     Audit-Id: 14bb95b4-52c5-49f6-baee-19c30e38be33
	I0307 10:28:58.208643    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.208733    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1235","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4619 chars]
	I0307 10:28:58.208922    7018 node_ready.go:49] node "multinode-260000-m02" has status "Ready":"True"
	I0307 10:28:58.208932    7018 node_ready.go:38] duration metric: took 4.5049847s waiting for node "multinode-260000-m02" to be "Ready" ...
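The block above is a plain poll loop: the same GET against /api/v1/nodes/multinode-260000-m02 is retried roughly every 500ms until the node's Ready condition flips to True. A minimal sketch of that pattern, assuming a standard client-go clientset (waitNodeReady is a hypothetical helper name, not minikube's actual function):

// Hypothetical sketch: poll a Node until its Ready condition reports True,
// mirroring the repeated GET /api/v1/nodes/<name> calls in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as transient and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "multinode-260000-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}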
	I0307 10:28:58.208937    7018 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 10:28:58.208966    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:58.208970    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.208977    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.208983    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.211168    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:58.211181    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.211186    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.211192    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.211200    7018 round_trippers.go:580]     Audit-Id: 9e29ae0f-c0b8-46e2-b2ef-ac7c8b7cd885
	I0307 10:28:58.211206    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.211211    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.211218    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.212031    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1235"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83248 chars]
	I0307 10:28:58.213928    7018 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.213959    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:58.213966    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.213972    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.213977    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.215266    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.215275    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.215280    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.215285    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.215299    7018 round_trippers.go:580]     Audit-Id: da0297af-ddf8-40bb-ba7e-ee7c25d1d50b
	I0307 10:28:58.215307    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.215315    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.215322    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.215421    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6489 chars]
	I0307 10:28:58.215654    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:58.215660    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.215667    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.215673    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.217001    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.217011    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.217018    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.217023    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.217030    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.217035    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.217044    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.217052    7018 round_trippers.go:580]     Audit-Id: bcd5819d-b6c4-402c-84d8-8b34af188a85
	I0307 10:28:58.217231    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:58.217408    7018 pod_ready.go:92] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:58.217413    7018 pod_ready.go:81] duration metric: took 3.477588ms waiting for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.217418    7018 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.217449    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-260000
	I0307 10:28:58.217455    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.217463    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.217469    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.218541    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.218548    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.218553    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.218559    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.218569    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.218574    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.218579    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.218584    7018 round_trippers.go:580]     Audit-Id: 3ecf0cc4-5524-4969-bf64-78cbfa7bcc64
	I0307 10:28:58.218670    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"1080","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6056 chars]
	I0307 10:28:58.218878    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:58.218884    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.218890    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.218895    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.220222    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.220239    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.220246    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.220251    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.220256    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.220262    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.220268    7018 round_trippers.go:580]     Audit-Id: 16035865-fbff-46a4-82b6-1d4dc225f856
	I0307 10:28:58.220272    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.220340    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:58.220511    7018 pod_ready.go:92] pod "etcd-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:58.220516    7018 pod_ready.go:81] duration metric: took 3.092542ms waiting for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.220524    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.220551    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-260000
	I0307 10:28:58.220555    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.220561    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.220566    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.221715    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.221722    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.221727    7018 round_trippers.go:580]     Audit-Id: db547fd7-e43b-49f4-9206-870682ba8ead
	I0307 10:28:58.221738    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.221744    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.221749    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.221754    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.221769    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.221904    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-260000","namespace":"kube-system","uid":"64ba25bc-eee2-433a-b0ef-a13769f04555","resourceVersion":"1143","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"76402f877907c95a3936143f580968be","kubernetes.io/config.mirror":"76402f877907c95a3936143f580968be","kubernetes.io/config.seen":"2023-03-07T18:18:28.739580253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7591 chars]
	I0307 10:28:58.222136    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:58.222142    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.222148    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.222153    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.223204    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.223213    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.223218    7018 round_trippers.go:580]     Audit-Id: af2553b3-7312-4d2a-a007-6b34fbaa60fe
	I0307 10:28:58.223223    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.223229    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.223233    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.223239    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.223243    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.223402    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:58.223567    7018 pod_ready.go:92] pod "kube-apiserver-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:58.223572    7018 pod_ready.go:81] duration metric: took 3.043676ms waiting for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.223578    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.223603    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-260000
	I0307 10:28:58.223607    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.223624    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.223632    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.224832    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.224840    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.224845    7018 round_trippers.go:580]     Audit-Id: 08c9fdf6-3267-4e2e-935f-9c4e84582ec5
	I0307 10:28:58.224850    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.224859    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.224864    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.224869    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.224874    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.225199    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-260000","namespace":"kube-system","uid":"8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c","resourceVersion":"1131","creationTimestamp":"2023-03-07T18:18:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.mirror":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.seen":"2023-03-07T18:18:16.838236256Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7159 chars]
	I0307 10:28:58.225429    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:58.225437    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.225443    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.225449    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.226687    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.226694    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.226699    7018 round_trippers.go:580]     Audit-Id: 7796790d-620c-401a-9f3a-b4ce8b9acc5f
	I0307 10:28:58.226704    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.226710    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.226714    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.226719    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.226725    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.226885    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:58.227057    7018 pod_ready.go:92] pod "kube-controller-manager-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:58.227062    7018 pod_ready.go:81] duration metric: took 3.479487ms waiting for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.227067    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.407059    7018 request.go:622] Waited for 179.951206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
	I0307 10:28:58.407094    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
	I0307 10:28:58.407101    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.407154    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.407160    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.408789    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.408801    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.408809    7018 round_trippers.go:580]     Audit-Id: c45ed864-b7ed-4df5-a14e-1c1a9c154846
	I0307 10:28:58.408817    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.408824    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.408829    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.408834    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.408845    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.409069    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8qwhq","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e455149-bbe2-4173-a413-f4962626b233","resourceVersion":"1061","creationTimestamp":"2023-03-07T18:18:41Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
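The "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go itself: each request first passes through a client-side token-bucket rate limiter (rest.Config defaults to QPS=5, Burst=10), and the wait is logged whenever the bucket is empty. A minimal sketch of that limiter using client-go's flowcontrol package with those default figures (the loop is illustrative, not minikube's code):

// Sketch: the token-bucket limiter client-go applies before each request.
// After the initial burst of 10 is spent, calls block to stay under 5 QPS,
// producing waits like the ~180-200ms ones visible in the log above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10) // 5 QPS, burst of 10
	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks until a token is available
		if waited := time.Since(start); waited > time.Millisecond {
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, waited)
		}
	}
}

Raising QPS and Burst on the rest.Config is the usual way for tooling that issues bursts of reads to avoid these waits.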
	I0307 10:28:58.608673    7018 request.go:622] Waited for 199.329269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:58.608848    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:58.608860    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.608872    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.608882    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.611654    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:58.611670    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.611677    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.611684    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.611692    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.611701    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.611709    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.611715    7018 round_trippers.go:580]     Audit-Id: 76524fea-611e-49f8-bb7e-5eb3dc168072
	I0307 10:28:58.611840    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:58.612099    7018 pod_ready.go:92] pod "kube-proxy-8qwhq" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:58.612108    7018 pod_ready.go:81] duration metric: took 385.031837ms waiting for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.612116    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.808367    7018 request.go:622] Waited for 196.171802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
	I0307 10:28:58.808492    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
	I0307 10:28:58.808504    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.808517    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.808529    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.811399    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:58.811415    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.811423    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.811429    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.811436    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.811442    7018 round_trippers.go:580]     Audit-Id: 3bbb7a3c-520d-4a16-9e4e-62fab5920986
	I0307 10:28:58.811449    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.811455    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.811559    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pxshj","generateName":"kube-proxy-","namespace":"kube-system","uid":"3ee33e87-083d-4833-a6d4-8b459ec6ea70","resourceVersion":"1218","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0307 10:28:59.008406    7018 request.go:622] Waited for 196.512217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:59.008597    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:59.008608    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:59.008621    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:59.008631    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:59.011231    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:59.011250    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:59.011258    7018 round_trippers.go:580]     Audit-Id: a7a3df8f-11e9-4890-88c0-bd4fb1da521d
	I0307 10:28:59.011266    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:59.011273    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:59.011280    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:59.011289    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:59.011295    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:59 GMT
	I0307 10:28:59.011388    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1235","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4619 chars]
	I0307 10:28:59.011635    7018 pod_ready.go:92] pod "kube-proxy-pxshj" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:59.011645    7018 pod_ready.go:81] duration metric: took 399.518428ms waiting for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:59.011652    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:59.208322    7018 request.go:622] Waited for 196.555002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
	I0307 10:28:59.208407    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
	I0307 10:28:59.208417    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:59.208432    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:59.208444    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:59.211802    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:59.211825    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:59.211836    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:59.211865    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:59 GMT
	I0307 10:28:59.211875    7018 round_trippers.go:580]     Audit-Id: 80279da3-3584-4856-89d4-205b357cfc2e
	I0307 10:28:59.211901    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:59.211908    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:59.211916    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:59.212031    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8cm8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9f69548-a872-4d80-aa73-ffba99b33229","resourceVersion":"1005","creationTimestamp":"2023-03-07T18:26:06Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0307 10:28:59.407671    7018 request.go:622] Waited for 195.295612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
	I0307 10:28:59.407782    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
	I0307 10:28:59.407790    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:59.407799    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:59.407807    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:59.409534    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:59.409543    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:59.409549    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:59 GMT
	I0307 10:28:59.409562    7018 round_trippers.go:580]     Audit-Id: dced968d-8259-48a8-a369-67bdece8d0ff
	I0307 10:28:59.409577    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:59.409586    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:59.409591    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:59.409597    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:59.409645    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m03","uid":"c193c270-6b50-44d5-962f-c88bf307bb54","resourceVersion":"1109","creationTimestamp":"2023-03-07T18:26:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4330 chars]
	I0307 10:28:59.409824    7018 pod_ready.go:92] pod "kube-proxy-q8cm8" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:59.409830    7018 pod_ready.go:81] duration metric: took 398.16179ms waiting for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:59.409836    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:59.607367    7018 request.go:622] Waited for 197.479712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
	I0307 10:28:59.607426    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
	I0307 10:28:59.607435    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:59.607535    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:59.607549    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:59.610313    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:59.610332    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:59.610344    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:59 GMT
	I0307 10:28:59.610351    7018 round_trippers.go:580]     Audit-Id: 831ac5c9-6a6e-4238-9a57-e226e9d7fa9a
	I0307 10:28:59.610359    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:59.610366    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:59.610373    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:59.610380    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:59.610482    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-260000","namespace":"kube-system","uid":"0739e1eb-4026-47ee-b2fe-6a9901c77317","resourceVersion":"1139","creationTimestamp":"2023-03-07T18:18:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.mirror":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.seen":"2023-03-07T18:18:28.739583516Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4889 chars]
	I0307 10:28:59.807243    7018 request.go:622] Waited for 196.466836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:59.807382    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:59.807393    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:59.807405    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:59.807416    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:59.809503    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:59.809522    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:59.809534    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:59.809565    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:59 GMT
	I0307 10:28:59.809578    7018 round_trippers.go:580]     Audit-Id: 0db6ab63-4a4e-453d-ac64-1584164a0c7d
	I0307 10:28:59.809586    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:59.809593    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:59.809600    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:59.809729    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:59.810013    7018 pod_ready.go:92] pod "kube-scheduler-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:59.810022    7018 pod_ready.go:81] duration metric: took 400.179443ms waiting for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:59.810030    7018 pod_ready.go:38] duration metric: took 1.60107891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 10:28:59.810045    7018 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 10:28:59.810114    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:28:59.818885    7018 system_svc.go:56] duration metric: took 8.836426ms WaitForService to wait for kubelet.
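The kubelet check above is just an exit-code test: systemctl is-active --quiet exits 0 if and only if the unit is active, so the runner only needs the command's status. A local sketch of the same test (the real step runs over SSH inside the VM, and minikube's exact invocation includes "service" as shown in the log line above):

// Sketch: treat `systemctl is-active --quiet <unit>` exit status 0 as "running".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet running:", err == nil)
}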
	I0307 10:28:59.818896    7018 kubeadm.go:578] duration metric: took 6.228675231s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0307 10:28:59.818910    7018 node_conditions.go:102] verifying NodePressure condition ...
	I0307 10:29:00.007159    7018 request.go:622] Waited for 188.194062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes
	I0307 10:29:00.007207    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
	I0307 10:29:00.007270    7018 round_trippers.go:469] Request Headers:
	I0307 10:29:00.007282    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:29:00.007294    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:29:00.010101    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:29:00.010120    7018 round_trippers.go:577] Response Headers:
	I0307 10:29:00.010131    7018 round_trippers.go:580]     Audit-Id: 230c0ab3-666e-4727-a5a5-c4ebee390789
	I0307 10:29:00.010139    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:29:00.010146    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:29:00.010153    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:29:00.010162    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:29:00.010174    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:29:00 GMT
	I0307 10:29:00.010474    7018 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1235"},"items":[{"metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16317 chars]
	I0307 10:29:00.011046    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:29:00.011058    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:29:00.011066    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:29:00.011071    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:29:00.011075    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:29:00.011082    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:29:00.011087    7018 node_conditions.go:105] duration metric: took 192.17207ms to run NodePressure ...
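The NodePressure check above reads each node's storage and CPU figures straight off the NodeList it just fetched. A sketch of the same read, assuming a client-go clientset (illustrative; reading Status.Capacity is an assumption here, since the log does not show whether minikube uses Capacity or Allocatable):

// Sketch: list nodes and print the figures logged above
// (ephemeral storage and CPU taken from node.Status.Capacity).
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}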
	I0307 10:29:00.011096    7018 start.go:228] waiting for startup goroutines ...
	I0307 10:29:00.011118    7018 start.go:242] writing updated cluster config ...
	I0307 10:29:00.011876    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:29:00.012002    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:29:00.054733    7018 out.go:177] * Starting worker node multinode-260000-m03 in cluster multinode-260000
	I0307 10:29:00.075685    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:29:00.075744    7018 cache.go:57] Caching tarball of preloaded images
	I0307 10:29:00.075937    7018 preload.go:174] Found /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 10:29:00.075956    7018 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0307 10:29:00.076097    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:29:00.077109    7018 cache.go:193] Successfully downloaded all kic artifacts
	I0307 10:29:00.077151    7018 start.go:364] acquiring machines lock for multinode-260000-m03: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:29:00.077243    7018 start.go:368] acquired machines lock for "multinode-260000-m03" in 73.572µs
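The acquiring/acquired pair above is minikube serializing host operations behind a named cross-process lock; the Spec fields it logs ({Name:... Delay:500ms Timeout:13m0s Cancel:<nil>}) match the juju/mutex API. A rough sketch under that assumption (github.com/juju/mutex/v2 as the dependency, and the field meanings, are inferred from the logged Spec rather than confirmed by this report):

// Rough sketch: acquire a named machine lock, retrying every 500ms
// for up to 13 minutes, then release it when the work is done.
package main

import (
	"fmt"
	"time"

	"github.com/juju/clock"
	"github.com/juju/mutex/v2"
)

func main() {
	spec := mutex.Spec{
		Name:    "machinesdemo", // hypothetical lock name (lowercase alphanumeric)
		Clock:   clock.WallClock,
		Delay:   500 * time.Millisecond, // retry interval, as in the logged Spec
		Timeout: 13 * time.Minute,       // give up after this long, as in the logged Spec
	}
	releaser, err := mutex.Acquire(spec)
	if err != nil {
		panic(err)
	}
	defer releaser.Release()
	fmt.Println("holding machines lock; safe to touch the VM")
}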
	I0307 10:29:00.077280    7018 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:29:00.077288    7018 fix.go:55] fixHost starting: m03
	I0307 10:29:00.077721    7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:29:00.077794    7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:29:00.085146    7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51690
	I0307 10:29:00.085469    7018 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:29:00.085788    7018 main.go:141] libmachine: Using API Version  1
	I0307 10:29:00.085809    7018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:29:00.086053    7018 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:29:00.086177    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:00.086254    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetState
	I0307 10:29:00.086348    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:29:00.086412    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid from json: 6959
	I0307 10:29:00.087210    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid 6959 missing from process table
	I0307 10:29:00.087228    7018 fix.go:103] recreateIfNeeded on multinode-260000-m03: state=Stopped err=<nil>
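fixHost concludes the VM is stopped because the pid recorded in the machine's JSON (6959) no longer exists in the process table. The same check from a shell on the host looks roughly like:
	# A non-zero exit from ps means the recorded hyperkit pid is stale
	ps -p 6959 > /dev/null 2>&1 || echo "hyperkit pid 6959 missing; VM is stopped"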
	I0307 10:29:00.087236    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	W0307 10:29:00.087313    7018 fix.go:129] unexpected machine state, will restart: <nil>
	I0307 10:29:00.108838    7018 out.go:177] * Restarting existing hyperkit VM for "multinode-260000-m03" ...
	I0307 10:29:00.150753    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .Start
	I0307 10:29:00.151097    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:29:00.151124    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/hyperkit.pid
	I0307 10:29:00.151193    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Using UUID 79b2bd18-bd15-11ed-8f77-149d997fca88
	I0307 10:29:00.180096    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Generated MAC 12:aa:e8:53:6e:6b
	I0307 10:29:00.180120    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000
	I0307 10:29:00.180266    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"79b2bd18-bd15-11ed-8f77-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002c11a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:29:00.180309    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"79b2bd18-bd15-11ed-8f77-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002c11a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:29:00.180345    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "79b2bd18-bd15-11ed-8f77-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/multinode-260000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"}
	I0307 10:29:00.180370    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 79b2bd18-bd15-11ed-8f77-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/multinode-260000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"
	I0307 10:29:00.180383    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0307 10:29:00.181671    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Pid is 7128
	I0307 10:29:00.182013    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Attempt 0
	I0307 10:29:00.182028    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:29:00.182112    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid from json: 7128
	I0307 10:29:00.183032    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Searching for 12:aa:e8:53:6e:6b in /var/db/dhcpd_leases ...
	I0307 10:29:00.183093    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0307 10:29:00.183123    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d3d8}
	I0307 10:29:00.183132    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d38e}
	I0307 10:29:00.183144    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x64078204}
	I0307 10:29:00.183153    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Found match: 12:aa:e8:53:6e:6b
	I0307 10:29:00.183173    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | IP: 192.168.64.15
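The driver resolves the guest's IP by scanning macOS's DHCP lease database for the MAC address it generated for this VM. The equivalent manual lookup on the host (same MAC as above):
	# Each lease entry carries Name, IPAddress and HWAddress fields
	grep -i '12:aa:e8:53:6e:6b' /var/db/dhcpd_leases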
	I0307 10:29:00.183209    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetConfigRaw
	I0307 10:29:00.183787    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetIP
	I0307 10:29:00.183966    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:29:00.184309    7018 machine.go:88] provisioning docker machine ...
	I0307 10:29:00.184319    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:00.184441    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetMachineName
	I0307 10:29:00.184532    7018 buildroot.go:166] provisioning hostname "multinode-260000-m03"
	I0307 10:29:00.184543    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetMachineName
	I0307 10:29:00.184630    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:00.184704    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:00.184784    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:00.184866    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:00.184944    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:00.185055    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:29:00.185361    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.15 22 <nil> <nil>}
	I0307 10:29:00.185370    7018 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-260000-m03 && echo "multinode-260000-m03" | sudo tee /etc/hostname
	I0307 10:29:00.188080    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0307 10:29:00.195643    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0307 10:29:00.196371    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:29:00.196384    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:29:00.196392    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:29:00.196404    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:29:00.552977    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0307 10:29:00.552995    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0307 10:29:00.657061    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:29:00.657081    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:29:00.657091    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:29:00.657102    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:29:00.657942    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0307 10:29:00.657953    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0307 10:29:05.166903    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0307 10:29:05.166935    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0307 10:29:05.166942    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0307 10:29:11.261985    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-260000-m03
	
	I0307 10:29:11.262003    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:11.262135    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:11.262237    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.262323    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.262404    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:11.262539    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:29:11.262858    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.15 22 <nil> <nil>}
	I0307 10:29:11.262870    7018 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-260000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-260000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-260000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 10:29:11.336626    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
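The two SSH commands above set the transient hostname and pin it in /etc/hosts (rewriting an existing 127.0.1.1 entry if present, appending one otherwise). A quick verification on the guest, assuming an SSH session is open:
	# Both should report multinode-260000-m03
	hostname
	grep '^127.0.1.1' /etc/hosts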
	I0307 10:29:11.336642    7018 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
	I0307 10:29:11.336650    7018 buildroot.go:174] setting up certificates
	I0307 10:29:11.336658    7018 provision.go:83] configureAuth start
	I0307 10:29:11.336666    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetMachineName
	I0307 10:29:11.336795    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetIP
	I0307 10:29:11.336894    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:11.336973    7018 provision.go:138] copyHostCerts
	I0307 10:29:11.337009    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:29:11.337059    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
	I0307 10:29:11.337064    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:29:11.337174    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
	I0307 10:29:11.337363    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:29:11.337395    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
	I0307 10:29:11.337400    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:29:11.337460    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
	I0307 10:29:11.337578    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:29:11.337610    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
	I0307 10:29:11.337615    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:29:11.337670    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
	I0307 10:29:11.337789    7018 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.multinode-260000-m03 san=[192.168.64.15 192.168.64.15 localhost 127.0.0.1 minikube multinode-260000-m03]
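The server certificate is minted from minikube's local CA with the SANs listed above (the node IP, localhost, and both hostnames). Assuming openssl is available on the host, the generated SANs can be inspected with:
	# Print the SAN list of the freshly generated server certificate
	openssl x509 -in /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'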
	I0307 10:29:11.427111    7018 provision.go:172] copyRemoteCerts
	I0307 10:29:11.427165    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 10:29:11.427179    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:11.427324    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:11.427419    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.427541    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:11.427623    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
	I0307 10:29:11.465606    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0307 10:29:11.465676    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 10:29:11.481351    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0307 10:29:11.481417    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0307 10:29:11.496933    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0307 10:29:11.496996    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 10:29:11.512347    7018 provision.go:86] duration metric: configureAuth took 175.680754ms
	I0307 10:29:11.512360    7018 buildroot.go:189] setting minikube options for container-runtime
	I0307 10:29:11.512526    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:29:11.512539    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:11.512663    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:11.512758    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:11.512840    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.512918    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.512998    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:11.513100    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:29:11.513391    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.15 22 <nil> <nil>}
	I0307 10:29:11.513399    7018 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 10:29:11.579311    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 10:29:11.579323    7018 buildroot.go:70] root file system type: tmpfs
	I0307 10:29:11.579401    7018 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 10:29:11.579411    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:11.579540    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:11.579641    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.579740    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.579829    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:11.579956    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:29:11.580270    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.15 22 <nil> <nil>}
	I0307 10:29:11.580316    7018 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.64.12"
	Environment="NO_PROXY=192.168.64.12,192.168.64.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 10:29:11.652702    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.64.12
	Environment=NO_PROXY=192.168.64.12,192.168.64.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 10:29:11.652720    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:11.652848    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:11.652922    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.653006    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.653098    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:11.653250    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:29:11.653560    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.15 22 <nil> <nil>}
	I0307 10:29:11.653573    7018 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 10:29:12.175360    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 10:29:12.175374    7018 machine.go:91] provisioned docker machine in 11.991002684s
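The command above installs the unit only when it differs from what is already on disk; here diff fails because no unit existed yet, so the new file is moved into place, the daemon reloaded, and docker enabled and restarted. Note that the unit carries two Environment=NO_PROXY lines; in systemd, a later assignment to the same variable overrides the earlier one, so only the two-address value takes effect. One way to confirm this on the guest:
	# Shows the Environment= values systemd actually applied to docker.service
	systemctl show docker --property=Environment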
	I0307 10:29:12.175381    7018 start.go:300] post-start starting for "multinode-260000-m03" (driver="hyperkit")
	I0307 10:29:12.175386    7018 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 10:29:12.175396    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:12.175581    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 10:29:12.175596    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:12.175686    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:12.175759    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:12.175827    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:12.175912    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
	I0307 10:29:12.214369    7018 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 10:29:12.216755    7018 command_runner.go:130] > NAME=Buildroot
	I0307 10:29:12.216767    7018 command_runner.go:130] > VERSION=2021.02.12-1-gab7f370-dirty
	I0307 10:29:12.216773    7018 command_runner.go:130] > ID=buildroot
	I0307 10:29:12.216793    7018 command_runner.go:130] > VERSION_ID=2021.02.12
	I0307 10:29:12.216800    7018 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0307 10:29:12.216963    7018 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 10:29:12.216972    7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
	I0307 10:29:12.217057    7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
	I0307 10:29:12.217200    7018 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
	I0307 10:29:12.217206    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /etc/ssl/certs/39032.pem
	I0307 10:29:12.217370    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 10:29:12.223606    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
	I0307 10:29:12.239878    7018 start.go:303] post-start completed in 64.487773ms
	I0307 10:29:12.239896    7018 fix.go:57] fixHost completed within 12.162546961s
	I0307 10:29:12.239910    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:12.240038    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:12.240131    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:12.240212    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:12.240290    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:12.240409    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:29:12.240714    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.15 22 <nil> <nil>}
	I0307 10:29:12.240722    7018 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 10:29:12.305514    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678213752.437212482
	
	I0307 10:29:12.305525    7018 fix.go:207] guest clock: 1678213752.437212482
	I0307 10:29:12.305531    7018 fix.go:220] Guest: 2023-03-07 10:29:12.437212482 -0800 PST Remote: 2023-03-07 10:29:12.239899 -0800 PST m=+114.574278242 (delta=197.313482ms)
	I0307 10:29:12.305540    7018 fix.go:191] guest clock delta is within tolerance: 197.313482ms
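The guest-clock check runs date +%s.%N inside the VM, compares it against host wall time, and only resyncs when the delta exceeds a tolerance. A rough host-side version of the same comparison (second granularity only, and assuming the SSH key and user shown elsewhere in this log):
	# Compare host and guest clocks; a small delta is expected and fine
	host=$(date +%s)
	guest=$(ssh -i /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa docker@192.168.64.15 date +%s)
	echo "delta: $((host - guest))s"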
	I0307 10:29:12.305543    7018 start.go:83] releasing machines lock for "multinode-260000-m03", held for 12.228234634s
	I0307 10:29:12.305562    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:12.305681    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetIP
	I0307 10:29:12.327827    7018 out.go:177] * Found network options:
	I0307 10:29:12.349261    7018 out.go:177]   - NO_PROXY=192.168.64.12,192.168.64.13
	W0307 10:29:12.371206    7018 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 10:29:12.371232    7018 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 10:29:12.371252    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:12.372006    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:12.372213    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:12.372340    7018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 10:29:12.372393    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	W0307 10:29:12.372424    7018 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 10:29:12.372448    7018 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 10:29:12.372546    7018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 10:29:12.372566    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:12.372582    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:12.372778    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:12.372789    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:12.372944    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:12.372988    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:12.373142    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
	I0307 10:29:12.373168    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:12.373363    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
	I0307 10:29:12.410014    7018 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0307 10:29:12.410159    7018 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 10:29:12.410222    7018 ssh_runner.go:195] Run: which cri-dockerd
	I0307 10:29:12.452473    7018 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0307 10:29:12.452552    7018 command_runner.go:130] > /usr/bin/cri-dockerd
	I0307 10:29:12.452679    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 10:29:12.459245    7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0307 10:29:12.470219    7018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 10:29:12.486201    7018 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0307 10:29:12.486242    7018 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
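minikube sidelines any bridge/podman CNI configs by renaming them with a .mk_disabled suffix so that kindnet, the CNI configured for this cluster, is the only active config. On the guest this is visible as:
	# The podman bridge config should now carry the .mk_disabled suffix
	ls -l /etc/cni/net.d/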
	I0307 10:29:12.486250    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:29:12.486346    7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:29:12.502691    7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
	I0307 10:29:12.502703    7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 10:29:12.502708    7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 10:29:12.502712    7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 10:29:12.502716    7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 10:29:12.502719    7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 10:29:12.502723    7018 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 10:29:12.502728    7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 10:29:12.502732    7018 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0307 10:29:12.502737    7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:29:12.503864    7018 docker.go:630] Got preloaded images: -- stdout --
	kindest/kindnetd:v20230227-15197099
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 10:29:12.503874    7018 docker.go:560] Images already preloaded, skipping extraction
	I0307 10:29:12.503880    7018 start.go:485] detecting cgroup driver to use...
	I0307 10:29:12.503940    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:29:12.523327    7018 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0307 10:29:12.523340    7018 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0307 10:29:12.524671    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 10:29:12.536597    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 10:29:12.544140    7018 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 10:29:12.544193    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 10:29:12.550489    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:29:12.556842    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 10:29:12.563095    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:29:12.569445    7018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 10:29:12.575946    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 10:29:12.582556    7018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 10:29:12.588055    7018 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0307 10:29:12.588181    7018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
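These two kernel settings are prerequisites for pod networking: bridged traffic must traverse iptables, and IPv4 forwarding must be enabled. Both can be checked in one go on the guest:
	# Both should report 1
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward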
	I0307 10:29:12.594025    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:29:12.673337    7018 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 10:29:12.685510    7018 start.go:485] detecting cgroup driver to use...
	I0307 10:29:12.685584    7018 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 10:29:12.695059    7018 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0307 10:29:12.696323    7018 command_runner.go:130] > [Unit]
	I0307 10:29:12.696352    7018 command_runner.go:130] > Description=Docker Application Container Engine
	I0307 10:29:12.696362    7018 command_runner.go:130] > Documentation=https://docs.docker.com
	I0307 10:29:12.696367    7018 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0307 10:29:12.696371    7018 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0307 10:29:12.696375    7018 command_runner.go:130] > StartLimitBurst=3
	I0307 10:29:12.696382    7018 command_runner.go:130] > StartLimitIntervalSec=60
	I0307 10:29:12.696388    7018 command_runner.go:130] > [Service]
	I0307 10:29:12.696393    7018 command_runner.go:130] > Type=notify
	I0307 10:29:12.696397    7018 command_runner.go:130] > Restart=on-failure
	I0307 10:29:12.696402    7018 command_runner.go:130] > Environment=NO_PROXY=192.168.64.12
	I0307 10:29:12.696406    7018 command_runner.go:130] > Environment=NO_PROXY=192.168.64.12,192.168.64.13
	I0307 10:29:12.696413    7018 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0307 10:29:12.696422    7018 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0307 10:29:12.696428    7018 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0307 10:29:12.696433    7018 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0307 10:29:12.696439    7018 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0307 10:29:12.696445    7018 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0307 10:29:12.696454    7018 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0307 10:29:12.696462    7018 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0307 10:29:12.696468    7018 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0307 10:29:12.696471    7018 command_runner.go:130] > ExecStart=
	I0307 10:29:12.696485    7018 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0307 10:29:12.696489    7018 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0307 10:29:12.696497    7018 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0307 10:29:12.696503    7018 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0307 10:29:12.696506    7018 command_runner.go:130] > LimitNOFILE=infinity
	I0307 10:29:12.696510    7018 command_runner.go:130] > LimitNPROC=infinity
	I0307 10:29:12.696514    7018 command_runner.go:130] > LimitCORE=infinity
	I0307 10:29:12.696519    7018 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0307 10:29:12.696524    7018 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0307 10:29:12.696527    7018 command_runner.go:130] > TasksMax=infinity
	I0307 10:29:12.696531    7018 command_runner.go:130] > TimeoutStartSec=0
	I0307 10:29:12.696536    7018 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0307 10:29:12.696540    7018 command_runner.go:130] > Delegate=yes
	I0307 10:29:12.696549    7018 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0307 10:29:12.696553    7018 command_runner.go:130] > KillMode=process
	I0307 10:29:12.696557    7018 command_runner.go:130] > [Install]
	I0307 10:29:12.696562    7018 command_runner.go:130] > WantedBy=multi-user.target
	I0307 10:29:12.696635    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:29:12.705902    7018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 10:29:12.738895    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:29:12.747844    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:29:12.756435    7018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 10:29:12.775075    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:29:12.783647    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:29:12.795348    7018 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 10:29:12.795358    7018 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
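Note that /etc/crictl.yaml is written twice in this run: first pointing at containerd while that runtime was being probed and shut down, and now at cri-dockerd, the CRI shim this cluster actually uses. With the file in place, crictl talks to docker through cri-dockerd:
	# Uses the runtime-endpoint configured in /etc/crictl.yaml
	sudo crictl info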
	I0307 10:29:12.795646    7018 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 10:29:12.877113    7018 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 10:29:12.966218    7018 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 10:29:12.966234    7018 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
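The 144-byte daemon.json pushed here pins docker's cgroup driver to cgroupfs so that it matches the kubelet (the literal payload is not shown in the log). Had the restart below succeeded, the setting could be verified on the guest with:
	# Should print cgroupfs once docker is running
	docker info --format '{{.CgroupDriver}}'
	cat /etc/docker/daemon.json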
	I0307 10:29:12.977829    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:29:13.058533    7018 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:30:14.087064    7018 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0307 10:30:14.087078    7018 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I0307 10:30:14.087168    7018 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.028339517s)
	I0307 10:30:14.108918    7018 out.go:177] 
	W0307 10:30:14.130829    7018 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0307 10:30:14.130853    7018 out.go:239] * 
	W0307 10:30:14.131956    7018 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:30:14.211985    7018 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:295: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-260000" : exit status 90
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-260000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-260000 -n multinode-260000
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-260000 logs -n 25: (2.872673251s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                            |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-260000 ssh -n                                                                                                    | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | multinode-260000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-260000 cp multinode-260000-m02:/home/docker/cp-test.txt                                                          | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile946595065/001/cp-test_multinode-260000-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-260000 ssh -n                                                                                                    | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | multinode-260000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-260000 cp multinode-260000-m02:/home/docker/cp-test.txt                                                          | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | multinode-260000:/home/docker/cp-test_multinode-260000-m02_multinode-260000.txt                                            |                  |         |         |                     |                     |
	| ssh     | multinode-260000 ssh -n                                                                                                    | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | multinode-260000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-260000 ssh -n multinode-260000 sudo cat                                                                          | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | /home/docker/cp-test_multinode-260000-m02_multinode-260000.txt                                                             |                  |         |         |                     |                     |
	| cp      | multinode-260000 cp multinode-260000-m02:/home/docker/cp-test.txt                                                          | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | multinode-260000-m03:/home/docker/cp-test_multinode-260000-m02_multinode-260000-m03.txt                                    |                  |         |         |                     |                     |
	| ssh     | multinode-260000 ssh -n                                                                                                    | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | multinode-260000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-260000 ssh -n multinode-260000-m03 sudo cat                                                                      | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | /home/docker/cp-test_multinode-260000-m02_multinode-260000-m03.txt                                                         |                  |         |         |                     |                     |
	| cp      | multinode-260000 cp testdata/cp-test.txt                                                                                   | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | multinode-260000-m03:/home/docker/cp-test.txt                                                                              |                  |         |         |                     |                     |
	| ssh     | multinode-260000 ssh -n                                                                                                    | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | multinode-260000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-260000 cp multinode-260000-m03:/home/docker/cp-test.txt                                                          | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile946595065/001/cp-test_multinode-260000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-260000 ssh -n                                                                                                    | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | multinode-260000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-260000 cp multinode-260000-m03:/home/docker/cp-test.txt                                                          | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | multinode-260000:/home/docker/cp-test_multinode-260000-m03_multinode-260000.txt                                            |                  |         |         |                     |                     |
	| ssh     | multinode-260000 ssh -n                                                                                                    | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | multinode-260000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-260000 ssh -n multinode-260000 sudo cat                                                                          | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | /home/docker/cp-test_multinode-260000-m03_multinode-260000.txt                                                             |                  |         |         |                     |                     |
	| cp      | multinode-260000 cp multinode-260000-m03:/home/docker/cp-test.txt                                                          | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | multinode-260000-m02:/home/docker/cp-test_multinode-260000-m03_multinode-260000-m02.txt                                    |                  |         |         |                     |                     |
	| ssh     | multinode-260000 ssh -n                                                                                                    | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | multinode-260000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-260000 ssh -n multinode-260000-m02 sudo cat                                                                      | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | /home/docker/cp-test_multinode-260000-m03_multinode-260000-m02.txt                                                         |                  |         |         |                     |                     |
	| node    | multinode-260000 node stop m03                                                                                             | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	| node    | multinode-260000 node start                                                                                                | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:26 PST |
	|         | m03 --alsologtostderr                                                                                                      |                  |         |         |                     |                     |
	| node    | list -p multinode-260000                                                                                                   | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST |                     |
	| stop    | -p multinode-260000                                                                                                        | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:26 PST | 07 Mar 23 10:27 PST |
	| start   | -p multinode-260000                                                                                                        | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:27 PST |                     |
	|         | --wait=true -v=8                                                                                                           |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                          |                  |         |         |                     |                     |
	| node    | list -p multinode-260000                                                                                                   | multinode-260000 | jenkins | v1.29.0 | 07 Mar 23 10:30 PST |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
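	
	The audit rows above all follow one pattern: a file pushed with minikube cp is immediately read back with minikube ssh -n <node> sudo cat and compared against the source. A minimal Go sketch of that copy-then-verify loop (the helper name, binary path, and exact flag layout are assumptions inferred from this run, not the test's actual code):
	
		package sketch
	
		import (
			"bytes"
			"fmt"
			"os"
			"os/exec"
		)
	
		// copyAndVerify pushes a local file into a node, reads it back over
		// SSH, and fails on any content mismatch.
		func copyAndVerify(bin, profile, node, localPath, remotePath string) error {
			cp := exec.Command(bin, "-p", profile, "cp", localPath, node+":"+remotePath)
			if out, err := cp.CombinedOutput(); err != nil {
				return fmt.Errorf("cp failed: %v: %s", err, out)
			}
			got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat "+remotePath).Output()
			if err != nil {
				return err
			}
			want, err := os.ReadFile(localPath)
			if err != nil {
				return err
			}
			if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
				return fmt.Errorf("%s:%s does not match %s", node, remotePath, localPath)
			}
			return nil
		}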
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/07 10:27:17
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 10:27:17.701567    7018 out.go:296] Setting OutFile to fd 1 ...
	I0307 10:27:17.701766    7018 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:27:17.701771    7018 out.go:309] Setting ErrFile to fd 2...
	I0307 10:27:17.701775    7018 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:27:17.701881    7018 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15985-3430/.minikube/bin
	I0307 10:27:17.703156    7018 out.go:303] Setting JSON to false
	I0307 10:27:17.723710    7018 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3412,"bootTime":1678210225,"procs":381,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 10:27:17.723849    7018 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0307 10:27:17.767920    7018 out.go:177] * [multinode-260000] minikube v1.29.0 on Darwin 13.2.1
	I0307 10:27:17.789379    7018 notify.go:220] Checking for updates...
	I0307 10:27:17.811044    7018 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 10:27:17.832029    7018 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:27:17.853161    7018 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 10:27:17.875122    7018 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:27:17.896016    7018 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	I0307 10:27:17.917197    7018 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:27:17.939813    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:27:17.939897    7018 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 10:27:17.940536    7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:27:17.940612    7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:27:17.948145    7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51638
	I0307 10:27:17.948508    7018 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:27:17.948945    7018 main.go:141] libmachine: Using API Version  1
	I0307 10:27:17.948957    7018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:27:17.949170    7018 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:27:17.949257    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:17.976910    7018 out.go:177] * Using the hyperkit driver based on existing profile
	I0307 10:27:18.019030    7018 start.go:296] selected driver: hyperkit
	I0307 10:27:18.019085    7018 start.go:857] validating driver "hyperkit" against &{Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 10:27:18.019304    7018 start.go:868] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:27:18.019411    7018 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:27:18.019612    7018 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15985-3430/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0307 10:27:18.027551    7018 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.29.0
	I0307 10:27:18.031921    7018 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:27:18.031941    7018 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0307 10:27:18.034844    7018 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:27:18.034876    7018 cni.go:84] Creating CNI manager for ""
	I0307 10:27:18.034887    7018 cni.go:136] 3 nodes found, recommending kindnet
	I0307 10:27:18.034896    7018 start_flags.go:319] config:
	{Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
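	
	Two details in the saved profile above are worth noting: the m03 entry carries an empty ContainerRuntime, and the CNI recommendation at cni.go:136 is made as soon as three nodes are found. A sketch of that recommendation (chooseCNI is a hypothetical helper, not minikube's code; the real selection likely also weighs the runtime and driver):
	
		package sketch
	
		// chooseCNI mirrors "3 nodes found, recommending kindnet": with more
		// than one node and no CNI requested, a multi-node-capable CNI is
		// needed so pod traffic can route between VMs.
		func chooseCNI(requested string, nodeCount int) string {
			if requested != "" {
				return requested // an explicit --cni flag always wins
			}
			if nodeCount > 1 {
				return "kindnet"
			}
			return "" // single node: the runtime's default networking suffices
		}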
	I0307 10:27:18.035029    7018 iso.go:125] acquiring lock: {Name:mk7e0ac9e85418e0580033b84b7097185a725e89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:27:18.076950    7018 out.go:177] * Starting control plane node multinode-260000 in cluster multinode-260000
	I0307 10:27:18.098026    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:27:18.098116    7018 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0307 10:27:18.098148    7018 cache.go:57] Caching tarball of preloaded images
	I0307 10:27:18.098313    7018 preload.go:174] Found /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 10:27:18.098333    7018 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0307 10:27:18.098530    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:27:18.099358    7018 cache.go:193] Successfully downloaded all kic artifacts
	I0307 10:27:18.099407    7018 start.go:364] acquiring machines lock for multinode-260000: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:27:18.099512    7018 start.go:368] acquired machines lock for "multinode-260000" in 86.293µs
	I0307 10:27:18.099554    7018 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:27:18.099566    7018 fix.go:55] fixHost starting: 
	I0307 10:27:18.100062    7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:27:18.100091    7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:27:18.107480    7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51640
	I0307 10:27:18.107803    7018 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:27:18.108127    7018 main.go:141] libmachine: Using API Version  1
	I0307 10:27:18.108137    7018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:27:18.108326    7018 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:27:18.108443    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:18.108543    7018 main.go:141] libmachine: (multinode-260000) Calling .GetState
	I0307 10:27:18.108624    7018 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:27:18.108709    7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid from json: 6235
	I0307 10:27:18.109465    7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid 6235 missing from process table
	I0307 10:27:18.109498    7018 fix.go:103] recreateIfNeeded on multinode-260000: state=Stopped err=<nil>
	I0307 10:27:18.109518    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	W0307 10:27:18.109599    7018 fix.go:129] unexpected machine state, will restart: <nil>
	I0307 10:27:18.130859    7018 out.go:177] * Restarting existing hyperkit VM for "multinode-260000" ...
	I0307 10:27:18.151952    7018 main.go:141] libmachine: (multinode-260000) Calling .Start
	I0307 10:27:18.152162    7018 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:27:18.152193    7018 main.go:141] libmachine: (multinode-260000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid
	I0307 10:27:18.153359    7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid 6235 missing from process table
	I0307 10:27:18.153369    7018 main.go:141] libmachine: (multinode-260000) DBG | pid 6235 is in state "Stopped"
	I0307 10:27:18.153384    7018 main.go:141] libmachine: (multinode-260000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid...
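	
	The restart path treats a pid file whose process has vanished as an unclean shutdown: the stale file is removed and the VM is started fresh. A sketch of that check (hypothetical helper; the driver's actual handling may differ in details):
	
		package sketch
	
		import (
			"errors"
			"fmt"
			"os"
			"strconv"
			"strings"
			"syscall"
		)
	
		// clearStalePid removes hyperkit.pid when the recorded pid is gone
		// from the process table, matching the sequence logged above.
		func clearStalePid(pidFile string) error {
			data, err := os.ReadFile(pidFile)
			if errors.Is(err, os.ErrNotExist) {
				return nil // clean shutdown: nothing left behind
			}
			if err != nil {
				return err
			}
			pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
			if err != nil {
				return err
			}
			// Signal 0 probes for existence without delivering a signal.
			if p, err := os.FindProcess(pid); err == nil && p.Signal(syscall.Signal(0)) == nil {
				return fmt.Errorf("hyperkit pid %d is still running", pid)
			}
			return os.Remove(pidFile) // stale: safe to delete and restart
		}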
	I0307 10:27:18.153520    7018 main.go:141] libmachine: (multinode-260000) DBG | Using UUID 6086a850-bd14-11ed-9c3c-149d997fca88
	I0307 10:27:18.261699    7018 main.go:141] libmachine: (multinode-260000) DBG | Generated MAC f2:4e:cd:75:18:a7
	I0307 10:27:18.261738    7018 main.go:141] libmachine: (multinode-260000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000
	I0307 10:27:18.261843    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6086a850-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ecbd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:27:18.261893    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6086a850-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ecbd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:27:18.261955    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6086a850-bd14-11ed-9c3c-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/multinode-260000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"}
	I0307 10:27:18.262040    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6086a850-bd14-11ed-9c3c-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/multinode-260000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/console-ring -f kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"
	I0307 10:27:18.262064    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0307 10:27:18.263449    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 DEBUG: hyperkit: Pid is 7033
	I0307 10:27:18.263845    7018 main.go:141] libmachine: (multinode-260000) DBG | Attempt 0
	I0307 10:27:18.263868    7018 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:27:18.263948    7018 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid from json: 7033
	I0307 10:27:18.265382    7018 main.go:141] libmachine: (multinode-260000) DBG | Searching for f2:4e:cd:75:18:a7 in /var/db/dhcpd_leases ...
	I0307 10:27:18.265430    7018 main.go:141] libmachine: (multinode-260000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0307 10:27:18.265476    7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x64078204}
	I0307 10:27:18.265490    7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ca:14:a2:6d:d0:c ID:1,ca:14:a2:6d:d0:c Lease:0x6407819f}
	I0307 10:27:18.265519    7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d194}
	I0307 10:27:18.265530    7018 main.go:141] libmachine: (multinode-260000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d15a}
	I0307 10:27:18.265540    7018 main.go:141] libmachine: (multinode-260000) DBG | Found match: f2:4e:cd:75:18:a7
	I0307 10:27:18.265548    7018 main.go:141] libmachine: (multinode-260000) DBG | IP: 192.168.64.12
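	
	The driver discovers the VM's IP by matching its generated MAC against the host's /var/db/dhcpd_leases, as the search above shows. A sketch of that lookup (the key=value layout inside a lease block is an assumption; within a block, ip_address is taken to precede hw_address):
	
		package sketch
	
		import (
			"bufio"
			"fmt"
			"os"
			"strings"
		)
	
		// ipForMAC returns the leased IP whose hw_address ends with mac.
		func ipForMAC(leasesPath, mac string) (string, error) {
			f, err := os.Open(leasesPath)
			if err != nil {
				return "", err
			}
			defer f.Close()
			var ip string
			sc := bufio.NewScanner(f)
			for sc.Scan() {
				line := strings.TrimSpace(sc.Text())
				if v, ok := strings.CutPrefix(line, "ip_address="); ok {
					ip = v // remember the block's IP until its MAC is seen
				}
				if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
					return ip, nil
				}
			}
			return "", fmt.Errorf("no DHCP lease found for %s", mac)
		}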
	I0307 10:27:18.265590    7018 main.go:141] libmachine: (multinode-260000) Calling .GetConfigRaw
	I0307 10:27:18.266196    7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
	I0307 10:27:18.266384    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:27:18.266657    7018 machine.go:88] provisioning docker machine ...
	I0307 10:27:18.266667    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:18.266773    7018 main.go:141] libmachine: (multinode-260000) Calling .GetMachineName
	I0307 10:27:18.266878    7018 buildroot.go:166] provisioning hostname "multinode-260000"
	I0307 10:27:18.266892    7018 main.go:141] libmachine: (multinode-260000) Calling .GetMachineName
	I0307 10:27:18.266989    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:18.267073    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:18.267172    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:18.267250    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:18.267341    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:18.267461    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:27:18.267830    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0307 10:27:18.267839    7018 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-260000 && echo "multinode-260000" | sudo tee /etc/hostname
	I0307 10:27:18.269902    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0307 10:27:18.319277    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0307 10:27:18.319873    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:27:18.319886    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:27:18.319904    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:27:18.319918    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:27:18.674514    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0307 10:27:18.674532    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0307 10:27:18.778516    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:27:18.778535    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:27:18.778566    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:27:18.778585    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:27:18.779423    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0307 10:27:18.779434    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0307 10:27:23.282731    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0307 10:27:23.282756    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0307 10:27:23.282762    7018 main.go:141] libmachine: (multinode-260000) DBG | 2023/03/07 10:27:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0307 10:27:53.345501    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-260000
	
	I0307 10:27:53.345516    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:53.345641    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:53.345737    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.345814    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.345897    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:53.346017    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:27:53.346336    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0307 10:27:53.346349    7018 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-260000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-260000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-260000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 10:27:53.408248    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
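	
	The /etc/hosts snippet above is deliberately idempotent: grep -xq matches whole lines only, so the hostname mapping is written at most once, and an existing 127.0.1.1 line is rewritten in place rather than appended, leaving the 127.0.0.1 localhost entry untouched.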
	I0307 10:27:53.408267    7018 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
	I0307 10:27:53.408279    7018 buildroot.go:174] setting up certificates
	I0307 10:27:53.408288    7018 provision.go:83] configureAuth start
	I0307 10:27:53.408298    7018 main.go:141] libmachine: (multinode-260000) Calling .GetMachineName
	I0307 10:27:53.408431    7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
	I0307 10:27:53.408534    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:53.408622    7018 provision.go:138] copyHostCerts
	I0307 10:27:53.408658    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:27:53.408716    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
	I0307 10:27:53.408724    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:27:53.408836    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
	I0307 10:27:53.409016    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:27:53.409051    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
	I0307 10:27:53.409056    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:27:53.409119    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
	I0307 10:27:53.409268    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:27:53.409298    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
	I0307 10:27:53.409303    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:27:53.409364    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
	I0307 10:27:53.409496    7018 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.multinode-260000 san=[192.168.64.12 192.168.64.12 localhost 127.0.0.1 minikube multinode-260000]
	I0307 10:27:53.471318    7018 provision.go:172] copyRemoteCerts
	I0307 10:27:53.471371    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 10:27:53.471386    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:53.471501    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:53.471590    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.471685    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:53.471784    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
	I0307 10:27:53.506343    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0307 10:27:53.506415    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 10:27:53.522448    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0307 10:27:53.522505    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0307 10:27:53.538178    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0307 10:27:53.538241    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 10:27:53.554443    7018 provision.go:86] duration metric: configureAuth took 146.138879ms
	I0307 10:27:53.554456    7018 buildroot.go:189] setting minikube options for container-runtime
	I0307 10:27:53.554627    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:27:53.554640    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:53.554773    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:53.554871    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:53.554956    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.555028    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.555105    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:53.555212    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:27:53.555523    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0307 10:27:53.555532    7018 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 10:27:53.611701    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 10:27:53.611715    7018 buildroot.go:70] root file system type: tmpfs
	I0307 10:27:53.611791    7018 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 10:27:53.611806    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:53.611930    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:53.612020    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.612103    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.612184    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:53.612317    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:27:53.612630    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0307 10:27:53.612673    7018 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 10:27:53.678288    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 10:27:53.678311    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:53.678443    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:53.678532    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.678617    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:53.678712    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:53.678844    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:27:53.679161    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0307 10:27:53.679175    7018 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 10:27:54.321619    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 10:27:54.321632    7018 machine.go:91] provisioned docker machine in 36.054802092s
	I0307 10:27:54.321643    7018 start.go:300] post-start starting for "multinode-260000" (driver="hyperkit")
	I0307 10:27:54.321648    7018 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 10:27:54.321659    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:54.321839    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 10:27:54.321852    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:54.321961    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:54.322042    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:54.322149    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:54.322246    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
	I0307 10:27:54.357925    7018 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 10:27:54.360302    7018 command_runner.go:130] > NAME=Buildroot
	I0307 10:27:54.360311    7018 command_runner.go:130] > VERSION=2021.02.12-1-gab7f370-dirty
	I0307 10:27:54.360321    7018 command_runner.go:130] > ID=buildroot
	I0307 10:27:54.360325    7018 command_runner.go:130] > VERSION_ID=2021.02.12
	I0307 10:27:54.360330    7018 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0307 10:27:54.360498    7018 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 10:27:54.360509    7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
	I0307 10:27:54.360589    7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
	I0307 10:27:54.360737    7018 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
	I0307 10:27:54.360743    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /etc/ssl/certs/39032.pem
	I0307 10:27:54.360917    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 10:27:54.366509    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
	I0307 10:27:54.382252    7018 start.go:303] post-start completed in 60.601074ms
	I0307 10:27:54.382265    7018 fix.go:57] fixHost completed within 36.282535453s
	I0307 10:27:54.382281    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:54.382411    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:54.382494    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:54.382592    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:54.382687    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:54.382812    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:27:54.383114    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0307 10:27:54.383122    7018 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0307 10:27:54.438352    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678213674.566046378
	
	I0307 10:27:54.438363    7018 fix.go:207] guest clock: 1678213674.566046378
	I0307 10:27:54.438368    7018 fix.go:220] Guest: 2023-03-07 10:27:54.566046378 -0800 PST Remote: 2023-03-07 10:27:54.382269 -0800 PST m=+36.717005002 (delta=183.777378ms)
	I0307 10:27:54.438390    7018 fix.go:191] guest clock delta is within tolerance: 183.777378ms
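	
	fix.go compares the guest's date +%s.%N output against the host clock and accepts the roughly 184ms delta. A sketch of that tolerance check (the threshold value is an assumption; the log does not show minikube's actual limit):
	
		package sketch
	
		import "time"
	
		// clockDeltaOK reports whether the guest clock (seconds since the
		// epoch, as printed by date +%s.%N) is within tolerance of the host.
		func clockDeltaOK(guestUnixSeconds float64, tolerance time.Duration) bool {
			host := float64(time.Now().UnixNano()) / float64(time.Second)
			delta := time.Duration((guestUnixSeconds - host) * float64(time.Second))
			if delta < 0 {
				delta = -delta
			}
			return delta <= tolerance
		}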
	I0307 10:27:54.438395    7018 start.go:83] releasing machines lock for "multinode-260000", held for 36.33870613s
	I0307 10:27:54.438412    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:54.438533    7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
	I0307 10:27:54.438635    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:54.438919    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:54.439021    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:27:54.439107    7018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 10:27:54.439131    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:54.439139    7018 ssh_runner.go:195] Run: cat /version.json
	I0307 10:27:54.439150    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:27:54.439230    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:54.439270    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:27:54.439355    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:54.439367    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:27:54.439464    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:54.439484    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:27:54.439556    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
	I0307 10:27:54.439569    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
	I0307 10:27:54.469202    7018 command_runner.go:130] > {"iso_version": "v1.29.0-1677261626-15923", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "d5f8b7c14d0e3cd88db476786b15ed1c8f7b9a62"}
	I0307 10:27:54.469345    7018 ssh_runner.go:195] Run: systemctl --version
	I0307 10:27:54.473110    7018 command_runner.go:130] > systemd 247 (247)
	I0307 10:27:54.473123    7018 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0307 10:27:54.510321    7018 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0307 10:27:54.511264    7018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 10:27:54.515706    7018 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0307 10:27:54.515766    7018 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 10:27:54.515808    7018 ssh_runner.go:195] Run: which cri-dockerd
	I0307 10:27:54.518180    7018 command_runner.go:130] > /usr/bin/cri-dockerd
	I0307 10:27:54.518271    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 10:27:54.524837    7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0307 10:27:54.535806    7018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 10:27:54.546514    7018 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0307 10:27:54.546672    7018 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 10:27:54.546690    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:27:54.546786    7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:27:54.561856    7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
	I0307 10:27:54.561870    7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 10:27:54.561875    7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 10:27:54.561879    7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 10:27:54.561885    7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 10:27:54.561889    7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 10:27:54.561893    7018 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 10:27:54.561898    7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 10:27:54.561902    7018 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0307 10:27:54.561906    7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:27:54.561912    7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0307 10:27:54.562858    7018 docker.go:630] Got preloaded images: -- stdout --
	kindest/kindnetd:v20230227-15197099
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0307 10:27:54.562875    7018 docker.go:560] Images already preloaded, skipping extraction
	I0307 10:27:54.562881    7018 start.go:485] detecting cgroup driver to use...
	I0307 10:27:54.562957    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:27:54.574839    7018 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0307 10:27:54.574851    7018 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0307 10:27:54.575174    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 10:27:54.582305    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 10:27:54.589279    7018 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 10:27:54.589317    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 10:27:54.596289    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:27:54.603219    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 10:27:54.610180    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:27:54.617267    7018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 10:27:54.624610    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 10:27:54.631553    7018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 10:27:54.637786    7018 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0307 10:27:54.637952    7018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 10:27:54.644168    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:27:54.724435    7018 ssh_runner.go:195] Run: sudo systemctl restart containerd
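
The sed edits above force containerd onto the cgroupfs driver before it is restarted. In a stock /etc/containerd/config.toml the table they target looks roughly like this (table path assumed from containerd's CRI-plugin defaults; the values are the ones the log's sed expressions write):

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false

Keeping containerd, Docker, and the kubelet on the same driver matters: the kubelet configuration generated further below also sets cgroupDriver: cgroupfs.
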
	I0307 10:27:54.736384    7018 start.go:485] detecting cgroup driver to use...
	I0307 10:27:54.736451    7018 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 10:27:54.745963    7018 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0307 10:27:54.745979    7018 command_runner.go:130] > [Unit]
	I0307 10:27:54.745984    7018 command_runner.go:130] > Description=Docker Application Container Engine
	I0307 10:27:54.745988    7018 command_runner.go:130] > Documentation=https://docs.docker.com
	I0307 10:27:54.745993    7018 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0307 10:27:54.745999    7018 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0307 10:27:54.746004    7018 command_runner.go:130] > StartLimitBurst=3
	I0307 10:27:54.746007    7018 command_runner.go:130] > StartLimitIntervalSec=60
	I0307 10:27:54.746011    7018 command_runner.go:130] > [Service]
	I0307 10:27:54.746014    7018 command_runner.go:130] > Type=notify
	I0307 10:27:54.746017    7018 command_runner.go:130] > Restart=on-failure
	I0307 10:27:54.746024    7018 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0307 10:27:54.746040    7018 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0307 10:27:54.746047    7018 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0307 10:27:54.746053    7018 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0307 10:27:54.746068    7018 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0307 10:27:54.746075    7018 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0307 10:27:54.746081    7018 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0307 10:27:54.746090    7018 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0307 10:27:54.746099    7018 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0307 10:27:54.746104    7018 command_runner.go:130] > ExecStart=
	I0307 10:27:54.746114    7018 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0307 10:27:54.746119    7018 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0307 10:27:54.746130    7018 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0307 10:27:54.746136    7018 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0307 10:27:54.746140    7018 command_runner.go:130] > LimitNOFILE=infinity
	I0307 10:27:54.746143    7018 command_runner.go:130] > LimitNPROC=infinity
	I0307 10:27:54.746147    7018 command_runner.go:130] > LimitCORE=infinity
	I0307 10:27:54.746156    7018 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0307 10:27:54.746161    7018 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0307 10:27:54.746165    7018 command_runner.go:130] > TasksMax=infinity
	I0307 10:27:54.746168    7018 command_runner.go:130] > TimeoutStartSec=0
	I0307 10:27:54.746173    7018 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0307 10:27:54.746179    7018 command_runner.go:130] > Delegate=yes
	I0307 10:27:54.746184    7018 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0307 10:27:54.746188    7018 command_runner.go:130] > KillMode=process
	I0307 10:27:54.746191    7018 command_runner.go:130] > [Install]
	I0307 10:27:54.746201    7018 command_runner.go:130] > WantedBy=multi-user.target
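
The pair of ExecStart= lines in this unit is the standard systemd idiom the inline comments describe: an empty ExecStart= first clears any inherited start command, and the second line supplies the real one, since a non-oneshot service may only carry a single ExecStart. The same pattern in a minimal drop-in (path and command illustrative):

    # /etc/systemd/system/docker.service.d/override.conf (illustrative)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
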
	I0307 10:27:54.746263    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:27:54.754873    7018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 10:27:54.766931    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:27:54.775320    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:27:54.784274    7018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 10:27:54.810077    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:27:54.819002    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:27:54.830417    7018 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 10:27:54.830427    7018 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 10:27:54.830775    7018 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 10:27:54.910530    7018 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 10:27:54.991106    7018 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 10:27:54.991125    7018 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0307 10:27:55.002612    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:27:55.082706    7018 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:27:56.344251    7018 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.261521172s)
	I0307 10:27:56.344319    7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 10:27:56.427984    7018 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 10:27:56.518324    7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 10:27:56.611821    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:27:56.699165    7018 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 10:27:56.710403    7018 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 10:27:56.710477    7018 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 10:27:56.714055    7018 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0307 10:27:56.714067    7018 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0307 10:27:56.714072    7018 command_runner.go:130] > Device: 16h/22d	Inode: 853         Links: 1
	I0307 10:27:56.714079    7018 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0307 10:27:56.714098    7018 command_runner.go:130] > Access: 2023-03-07 18:27:56.836416904 +0000
	I0307 10:27:56.714105    7018 command_runner.go:130] > Modify: 2023-03-07 18:27:56.836416904 +0000
	I0307 10:27:56.714109    7018 command_runner.go:130] > Change: 2023-03-07 18:27:56.838416903 +0000
	I0307 10:27:56.714113    7018 command_runner.go:130] >  Birth: -
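
"Will wait 60s for socket path" above is a stat-until-present poll against /var/run/cri-dockerd.sock. A minimal Go sketch of that kind of wait (interval and error handling assumed, not taken from minikube's source):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil // socket file is present
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
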
	I0307 10:27:56.714136    7018 start.go:553] Will wait 60s for crictl version
	I0307 10:27:56.714180    7018 ssh_runner.go:195] Run: which crictl
	I0307 10:27:56.716256    7018 command_runner.go:130] > /usr/bin/crictl
	I0307 10:27:56.716479    7018 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 10:27:56.782605    7018 command_runner.go:130] > Version:  0.1.0
	I0307 10:27:56.782630    7018 command_runner.go:130] > RuntimeName:  docker
	I0307 10:27:56.782659    7018 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0307 10:27:56.782788    7018 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0307 10:27:56.786182    7018 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0307 10:27:56.786249    7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:27:56.806368    7018 command_runner.go:130] > 20.10.23
	I0307 10:27:56.807205    7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:27:56.827016    7018 command_runner.go:130] > 20.10.23
	I0307 10:27:56.870119    7018 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
	I0307 10:27:56.870166    7018 main.go:141] libmachine: (multinode-260000) Calling .GetIP
	I0307 10:27:56.870574    7018 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0307 10:27:56.874782    7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
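
The one-liner above is a rewrite-in-place pattern for /etc/hosts: grep -v strips any stale host.minikube.internal entry, echo appends the current gateway mapping, and the result is copied back over /etc/hosts in a single sudo step. After it runs, the guest resolves the host via a line like:

    192.168.64.1	host.minikube.internal
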
	I0307 10:27:56.882699    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:27:56.882759    7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:27:56.898148    7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
	I0307 10:27:56.898160    7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 10:27:56.898164    7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 10:27:56.898169    7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 10:27:56.898172    7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 10:27:56.898176    7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 10:27:56.898180    7018 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 10:27:56.898184    7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 10:27:56.898188    7018 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0307 10:27:56.898197    7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:27:56.898202    7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0307 10:27:56.898858    7018 docker.go:630] Got preloaded images: -- stdout --
	kindest/kindnetd:v20230227-15197099
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0307 10:27:56.898867    7018 docker.go:560] Images already preloaded, skipping extraction
	I0307 10:27:56.898945    7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:27:56.913839    7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
	I0307 10:27:56.913851    7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 10:27:56.913855    7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 10:27:56.913869    7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 10:27:56.913873    7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 10:27:56.913877    7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 10:27:56.913881    7018 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 10:27:56.913885    7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 10:27:56.913889    7018 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0307 10:27:56.913893    7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:27:56.913900    7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0307 10:27:56.914547    7018 docker.go:630] Got preloaded images: -- stdout --
	kindest/kindnetd:v20230227-15197099
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0307 10:27:56.914562    7018 cache_images.go:84] Images are preloaded, skipping loading
	I0307 10:27:56.914636    7018 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 10:27:56.935563    7018 command_runner.go:130] > cgroupfs
	I0307 10:27:56.936272    7018 cni.go:84] Creating CNI manager for ""
	I0307 10:27:56.936282    7018 cni.go:136] 3 nodes found, recommending kindnet
	I0307 10:27:56.936296    7018 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0307 10:27:56.936310    7018 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.12 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-260000 NodeName:multinode-260000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0307 10:27:56.936405    7018 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-260000"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 10:27:56.936460    7018 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-260000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0307 10:27:56.936536    7018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0307 10:27:56.943109    7018 command_runner.go:130] > kubeadm
	I0307 10:27:56.943116    7018 command_runner.go:130] > kubectl
	I0307 10:27:56.943120    7018 command_runner.go:130] > kubelet
	I0307 10:27:56.943263    7018 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 10:27:56.943308    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 10:27:56.949592    7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (449 bytes)
	I0307 10:27:56.960366    7018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 10:27:56.970938    7018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2095 bytes)
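
The 2095-byte file written here is the kubeadm YAML shown above. Because an existing etcd member is found later, minikube takes the restart path: the .new file is promoted to /var/tmp/minikube/kubeadm.yaml and fed to individual kubeadm init phases rather than a full kubeadm init, e.g.:

    sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
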
	I0307 10:27:56.982338    7018 ssh_runner.go:195] Run: grep 192.168.64.12	control-plane.minikube.internal$ /etc/hosts
	I0307 10:27:56.984586    7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 10:27:56.991939    7018 certs.go:56] Setting up /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000 for IP: 192.168.64.12
	I0307 10:27:56.991953    7018 certs.go:186] acquiring lock for shared ca certs: {Name:mk21aa92235e3b083ba3cf4a52527e5734aca22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:27:56.992091    7018 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key
	I0307 10:27:56.992154    7018 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key
	I0307 10:27:56.992245    7018 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key
	I0307 10:27:56.992309    7018 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.key.546ed142
	I0307 10:27:56.992376    7018 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.key
	I0307 10:27:56.992385    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0307 10:27:56.992414    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0307 10:27:56.992439    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0307 10:27:56.992461    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0307 10:27:56.992479    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 10:27:56.992497    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0307 10:27:56.992518    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 10:27:56.992536    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 10:27:56.992623    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem (1338 bytes)
	W0307 10:27:56.992661    7018 certs.go:397] ignoring /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903_empty.pem, impossibly tiny 0 bytes
	I0307 10:27:56.992672    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 10:27:56.992706    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem (1082 bytes)
	I0307 10:27:56.992736    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem (1123 bytes)
	I0307 10:27:56.992769    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem (1675 bytes)
	I0307 10:27:56.992838    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem (1708 bytes)
	I0307 10:27:56.992873    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:27:56.992892    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem -> /usr/share/ca-certificates/3903.pem
	I0307 10:27:56.992913    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /usr/share/ca-certificates/39032.pem
	I0307 10:27:56.993367    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0307 10:27:57.008967    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 10:27:57.024057    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 10:27:57.039253    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 10:27:57.054424    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 10:27:57.069714    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 10:27:57.085285    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 10:27:57.100487    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 10:27:57.116166    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 10:27:57.131487    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem --> /usr/share/ca-certificates/3903.pem (1338 bytes)
	I0307 10:27:57.146782    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /usr/share/ca-certificates/39032.pem (1708 bytes)
	I0307 10:27:57.161670    7018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 10:27:57.172684    7018 ssh_runner.go:195] Run: openssl version
	I0307 10:27:57.175822    7018 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0307 10:27:57.176031    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/39032.pem && ln -fs /usr/share/ca-certificates/39032.pem /etc/ssl/certs/39032.pem"
	I0307 10:27:57.182397    7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/39032.pem
	I0307 10:27:57.185195    7018 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 18:06 /usr/share/ca-certificates/39032.pem
	I0307 10:27:57.185263    7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar  7 18:06 /usr/share/ca-certificates/39032.pem
	I0307 10:27:57.185306    7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/39032.pem
	I0307 10:27:57.188613    7018 command_runner.go:130] > 3ec20f2e
	I0307 10:27:57.188881    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/39032.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 10:27:57.195955    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 10:27:57.203206    7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:27:57.205892    7018 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:27:57.206086    7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar  7 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:27:57.206121    7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:27:57.209355    7018 command_runner.go:130] > b5213941
	I0307 10:27:57.209587    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 10:27:57.216626    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3903.pem && ln -fs /usr/share/ca-certificates/3903.pem /etc/ssl/certs/3903.pem"
	I0307 10:27:57.223521    7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3903.pem
	I0307 10:27:57.226194    7018 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 18:06 /usr/share/ca-certificates/3903.pem
	I0307 10:27:57.226381    7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar  7 18:06 /usr/share/ca-certificates/3903.pem
	I0307 10:27:57.226417    7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3903.pem
	I0307 10:27:57.229589    7018 command_runner.go:130] > 51391683
	I0307 10:27:57.229807    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3903.pem /etc/ssl/certs/51391683.0"
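
The hash-and-symlink sequence above is how OpenSSL's default verifier locates CA certificates: `openssl x509 -hash -noout -in <cert>` prints the subject-name hash, and the certificate must then be reachable as /etc/ssl/certs/<hash>.0. Worked with the log's own values for the minikube CA:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
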
	I0307 10:27:57.236882    7018 kubeadm.go:401] StartCluster: {Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false in
gress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP:}
	I0307 10:27:57.236992    7018 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 10:27:57.252692    7018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 10:27:57.259210    7018 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0307 10:27:57.259222    7018 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0307 10:27:57.259230    7018 command_runner.go:130] > /var/lib/minikube/etcd:
	I0307 10:27:57.259234    7018 command_runner.go:130] > member
	I0307 10:27:57.259381    7018 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0307 10:27:57.259400    7018 kubeadm.go:633] restartCluster start
	I0307 10:27:57.259443    7018 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 10:27:57.266382    7018 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:27:57.266677    7018 kubeconfig.go:135] verify returned: extract IP: "multinode-260000" does not appear in /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:27:57.266753    7018 kubeconfig.go:146] "multinode-260000" context is missing from /Users/jenkins/minikube-integration/15985-3430/kubeconfig - will repair!
	I0307 10:27:57.266945    7018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15985-3430/kubeconfig: {Name:mkea569ea3041d84fd3aeaa788f308c9891aa7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:27:57.267393    7018 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:27:57.267600    7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:27:57.268098    7018 cert_rotation.go:137] Starting client certificate rotation controller
	I0307 10:27:57.268266    7018 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 10:27:57.274410    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:27:57.274450    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:27:57.282537    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:27:57.783579    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:27:57.783768    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:27:57.794313    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:27:58.283596    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:27:58.283730    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:27:58.294644    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:27:58.782684    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:27:58.782873    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:27:58.793430    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:27:59.283543    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:27:59.283649    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:27:59.294225    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:27:59.782887    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:27:59.783019    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:27:59.793607    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:00.282689    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:00.282922    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:00.292782    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:00.784107    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:00.784212    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:00.794376    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:01.283293    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:01.283433    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:01.293684    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:01.783681    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:01.783913    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:01.794869    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:02.283942    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:02.284074    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:02.294517    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:02.782945    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:02.783113    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:02.794006    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:03.284588    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:03.284777    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:03.294981    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:03.783910    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:03.784171    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:03.795492    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:04.283913    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:04.284104    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:04.294550    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:04.784723    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:04.784921    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:04.795506    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:05.284742    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:05.284884    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:05.294924    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:05.784725    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:05.784834    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:05.795470    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:06.284719    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:06.284873    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:06.295722    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:06.784533    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:06.784754    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:06.795131    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:07.284699    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:07.287011    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:07.296334    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:07.296343    7018 api_server.go:165] Checking apiserver status ...
	I0307 10:28:07.296382    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0307 10:28:07.304816    7018 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0307 10:28:07.304829    7018 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0307 10:28:07.304833    7018 kubeadm.go:1120] stopping kube-system containers ...
	I0307 10:28:07.304891    7018 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 10:28:07.321379    7018 command_runner.go:130] > da06b08e5617
	I0307 10:28:07.321390    7018 command_runner.go:130] > c4559ff3518d
	I0307 10:28:07.321394    7018 command_runner.go:130] > 5b66601ca9d1
	I0307 10:28:07.321398    7018 command_runner.go:130] > 0ace7c6cf637
	I0307 10:28:07.321401    7018 command_runner.go:130] > 37e6cf092e1c
	I0307 10:28:07.321411    7018 command_runner.go:130] > ae9d394ad7a7
	I0307 10:28:07.321416    7018 command_runner.go:130] > 808d83da8d84
	I0307 10:28:07.321423    7018 command_runner.go:130] > 1bf0ab9eb4c5
	I0307 10:28:07.321426    7018 command_runner.go:130] > 2243964fbc4d
	I0307 10:28:07.321432    7018 command_runner.go:130] > 3b27eb7db4c2
	I0307 10:28:07.321436    7018 command_runner.go:130] > 10d167b9d987
	I0307 10:28:07.321440    7018 command_runner.go:130] > 6ac51e9516a2
	I0307 10:28:07.321443    7018 command_runner.go:130] > 3e9b5dec9e21
	I0307 10:28:07.321448    7018 command_runner.go:130] > 0721a87b433b
	I0307 10:28:07.321452    7018 command_runner.go:130] > aef4edf5b492
	I0307 10:28:07.321456    7018 command_runner.go:130] > cfcf920b7378
	I0307 10:28:07.322130    7018 docker.go:456] Stopping containers: [da06b08e5617 c4559ff3518d 5b66601ca9d1 0ace7c6cf637 37e6cf092e1c ae9d394ad7a7 808d83da8d84 1bf0ab9eb4c5 2243964fbc4d 3b27eb7db4c2 10d167b9d987 6ac51e9516a2 3e9b5dec9e21 0721a87b433b aef4edf5b492 cfcf920b7378]
	I0307 10:28:07.322197    7018 ssh_runner.go:195] Run: docker stop da06b08e5617 c4559ff3518d 5b66601ca9d1 0ace7c6cf637 37e6cf092e1c ae9d394ad7a7 808d83da8d84 1bf0ab9eb4c5 2243964fbc4d 3b27eb7db4c2 10d167b9d987 6ac51e9516a2 3e9b5dec9e21 0721a87b433b aef4edf5b492 cfcf920b7378
	I0307 10:28:07.338863    7018 command_runner.go:130] > da06b08e5617
	I0307 10:28:07.338874    7018 command_runner.go:130] > c4559ff3518d
	I0307 10:28:07.339268    7018 command_runner.go:130] > 5b66601ca9d1
	I0307 10:28:07.339476    7018 command_runner.go:130] > 0ace7c6cf637
	I0307 10:28:07.339531    7018 command_runner.go:130] > 37e6cf092e1c
	I0307 10:28:07.339608    7018 command_runner.go:130] > ae9d394ad7a7
	I0307 10:28:07.339615    7018 command_runner.go:130] > 808d83da8d84
	I0307 10:28:07.339735    7018 command_runner.go:130] > 1bf0ab9eb4c5
	I0307 10:28:07.339806    7018 command_runner.go:130] > 2243964fbc4d
	I0307 10:28:07.339952    7018 command_runner.go:130] > 3b27eb7db4c2
	I0307 10:28:07.340042    7018 command_runner.go:130] > 10d167b9d987
	I0307 10:28:07.340172    7018 command_runner.go:130] > 6ac51e9516a2
	I0307 10:28:07.340231    7018 command_runner.go:130] > 3e9b5dec9e21
	I0307 10:28:07.340237    7018 command_runner.go:130] > 0721a87b433b
	I0307 10:28:07.340416    7018 command_runner.go:130] > aef4edf5b492
	I0307 10:28:07.340541    7018 command_runner.go:130] > cfcf920b7378
	I0307 10:28:07.341444    7018 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0307 10:28:07.352567    7018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 10:28:07.358762    7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0307 10:28:07.358772    7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0307 10:28:07.358778    7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0307 10:28:07.358784    7018 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 10:28:07.358923    7018 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 10:28:07.358971    7018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 10:28:07.365297    7018 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0307 10:28:07.365309    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:28:07.435009    7018 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 10:28:07.435021    7018 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0307 10:28:07.435026    7018 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0307 10:28:07.435249    7018 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 10:28:07.435474    7018 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0307 10:28:07.435692    7018 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0307 10:28:07.436004    7018 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0307 10:28:07.436233    7018 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0307 10:28:07.436509    7018 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0307 10:28:07.436724    7018 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 10:28:07.436961    7018 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 10:28:07.437121    7018 command_runner.go:130] > [certs] Using the existing "sa" key
	I0307 10:28:07.438004    7018 command_runner.go:130] ! W0307 18:28:07.567847    1206 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 10:28:07.438020    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:28:07.477158    7018 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 10:28:07.530979    7018 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 10:28:07.671495    7018 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 10:28:07.806243    7018 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 10:28:08.012059    7018 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 10:28:08.013940    7018 command_runner.go:130] ! W0307 18:28:07.610432    1212 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 10:28:08.013962    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:28:08.064445    7018 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 10:28:08.064458    7018 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 10:28:08.064462    7018 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0307 10:28:08.158176    7018 command_runner.go:130] ! W0307 18:28:08.188188    1218 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 10:28:08.158212    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:28:08.205939    7018 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 10:28:08.205952    7018 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 10:28:08.207362    7018 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 10:28:08.208239    7018 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 10:28:08.211123    7018 command_runner.go:130] ! W0307 18:28:08.337529    1240 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 10:28:08.211182    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:28:08.268874    7018 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 10:28:08.276469    7018 command_runner.go:130] ! W0307 18:28:08.400815    1250 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
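	The four kubeadm runs above redo only the init phases a restart invalidates: "kubeconfig all", "kubelet-start", "control-plane all", and "etcd local". A minimal Go sketch of that sequence, run locally rather than through ssh_runner; the binary path, config path, and phase arguments are taken from the log, everything else is illustrative:

	package main

	import (
	    "fmt"
	    "os/exec"
	)

	func main() {
	    // Same four phases, same order as the log. "etcd local" writes only the
	    // local etcd manifest; the other phases take "all".
	    phases := []string{"kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	    for _, p := range phases {
	        // bash -c so $PATH expands, mirroring the logged command line.
	        script := `sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" ` +
	            "kubeadm init phase " + p + " --config /var/tmp/minikube/kubeadm.yaml"
	        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	        fmt.Printf("phase %q:\n%s", p, out)
	        if err != nil {
	            fmt.Println("phase failed:", err)
	            return
	        }
	    }
	}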
	I0307 10:28:08.276569    7018 api_server.go:51] waiting for apiserver process to appear ...
	I0307 10:28:08.276628    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:28:08.791796    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:28:09.291418    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:28:09.790079    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:28:10.289945    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:28:10.300303    7018 command_runner.go:130] > 1604
	I0307 10:28:10.300322    7018 api_server.go:71] duration metric: took 2.023748028s to wait for apiserver process to appear ...
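	The pgrep retries above are a plain poll-until-present loop on roughly a 500ms tick. A stdlib-only sketch of the same wait; the 2-minute deadline is an assumption, since the log never states minikube's actual timeout:

	package main

	import (
	    "fmt"
	    "os/exec"
	    "strings"
	    "time"
	)

	func main() {
	    // Retry the exact pgrep from the log until it reports a PID (pgrep exits
	    // non-zero while no matching process exists, which Output() surfaces as err).
	    deadline := time.Now().Add(2 * time.Minute) // assumed timeout, not from the log
	    for time.Now().Before(deadline) {
	        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	        if err == nil {
	            fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
	            return
	        }
	        time.Sleep(500 * time.Millisecond)
	    }
	    fmt.Println("timed out waiting for the kube-apiserver process")
	}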
	I0307 10:28:10.300332    7018 api_server.go:87] waiting for apiserver healthz status ...
	I0307 10:28:10.300340    7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0307 10:28:13.002874    7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0307 10:28:13.002891    7018 api_server.go:102] status: https://192.168.64.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0307 10:28:13.505043    7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0307 10:28:13.511549    7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0307 10:28:13.511564    7018 api_server.go:102] status: https://192.168.64.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0307 10:28:14.003030    7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0307 10:28:14.007459    7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0307 10:28:14.007479    7018 api_server.go:102] status: https://192.168.64.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0307 10:28:14.504449    7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0307 10:28:14.508376    7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 200:
	ok
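	The healthz probe rode out two expected transients before this 200: a 403 while the request was still anonymous, then a 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finished. A sketch of such a poll; InsecureSkipVerify stands in for loading the cluster CA, which a real client would take from minikube's ca.crt:

	package main

	import (
	    "crypto/tls"
	    "fmt"
	    "io"
	    "net/http"
	    "time"
	)

	func main() {
	    client := &http.Client{
	        Timeout:   5 * time.Second,
	        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    }
	    // Poll until the body is literally "ok"; the 403/500 responses along the
	    // way are the normal bootstrap transients shown in the log.
	    for i := 0; i < 60; i++ {
	        resp, err := client.Get("https://192.168.64.12:8443/healthz")
	        if err == nil {
	            body, _ := io.ReadAll(resp.Body)
	            resp.Body.Close()
	            if resp.StatusCode == http.StatusOK && string(body) == "ok" {
	                fmt.Println("healthz ok")
	                return
	            }
	            fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
	        }
	        time.Sleep(500 * time.Millisecond)
	    }
	    fmt.Println("healthz never became ready")
	}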
	I0307 10:28:14.508433    7018 round_trippers.go:463] GET https://192.168.64.12:8443/version
	I0307 10:28:14.508438    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:14.508446    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:14.508452    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:14.516136    7018 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 10:28:14.516148    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:14.516154    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:14.516158    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:14.516163    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:14.516168    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:14.516173    7018 round_trippers.go:580]     Content-Length: 263
	I0307 10:28:14.516178    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:14 GMT
	I0307 10:28:14.516185    7018 round_trippers.go:580]     Audit-Id: 364007ce-aca2-49dd-9978-704f40503cf3
	I0307 10:28:14.516202    7018 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.2",
	  "gitCommit": "fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b",
	  "gitTreeState": "clean",
	  "buildDate": "2023-02-22T13:32:22Z",
	  "goVersion": "go1.19.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0307 10:28:14.516246    7018 api_server.go:140] control plane version: v1.26.2
	I0307 10:28:14.516254    7018 api_server.go:130] duration metric: took 4.215899257s to wait for apiserver health ...
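	The "control plane version" line is read straight out of the /version body above. A minimal decode using a local struct with the same JSON field names (the real client unmarshals into the equivalent version.Info type from apimachinery):

	package main

	import (
	    "encoding/json"
	    "fmt"
	)

	// versionInfo carries just the fields this check needs from the response.
	type versionInfo struct {
	    Major      string `json:"major"`
	    Minor      string `json:"minor"`
	    GitVersion string `json:"gitVersion"`
	    Platform   string `json:"platform"`
	}

	func main() {
	    raw := `{"major":"1","minor":"26","gitVersion":"v1.26.2","platform":"linux/amd64"}`
	    var v versionInfo
	    if err := json.Unmarshal([]byte(raw), &v); err != nil {
	        panic(err)
	    }
	    fmt.Println("control plane version:", v.GitVersion) // matches the log line above
	}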
	I0307 10:28:14.516265    7018 cni.go:84] Creating CNI manager for ""
	I0307 10:28:14.516271    7018 cni.go:136] 3 nodes found, recommending kindnet
	I0307 10:28:14.538513    7018 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0307 10:28:14.558703    7018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 10:28:14.565010    7018 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0307 10:28:14.565023    7018 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0307 10:28:14.565030    7018 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0307 10:28:14.565035    7018 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0307 10:28:14.565040    7018 command_runner.go:130] > Access: 2023-03-07 18:27:25.800133630 +0000
	I0307 10:28:14.565044    7018 command_runner.go:130] > Modify: 2023-02-24 23:58:49.000000000 +0000
	I0307 10:28:14.565049    7018 command_runner.go:130] > Change: 2023-03-07 18:27:24.520133706 +0000
	I0307 10:28:14.565052    7018 command_runner.go:130] >  Birth: -
	I0307 10:28:14.565080    7018 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
	I0307 10:28:14.565086    7018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0307 10:28:14.614484    7018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 10:28:15.463255    7018 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0307 10:28:15.465520    7018 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0307 10:28:15.467209    7018 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0307 10:28:15.486465    7018 command_runner.go:130] > daemonset.apps/kindnet configured
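	Applying the CNI manifest is two steps in the log: copy the generated cni.yaml into the VM, then kubectl apply it with the pinned binary and the in-VM kubeconfig. A local sketch of the same pair of steps; the manifest body here is a placeholder for the 2428-byte kindnet manifest minikube generates:

	package main

	import (
	    "fmt"
	    "os"
	    "os/exec"
	)

	func main() {
	    // Placeholder content; the real file is generated by minikube's CNI manager.
	    manifest := []byte("# kindnet ClusterRole/Binding, ServiceAccount and DaemonSet go here\n")
	    if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0644); err != nil {
	        panic(err)
	    }
	    // kubectl apply is idempotent, which is why the log shows "unchanged" for
	    // resources that survived the restart and "configured" for the DaemonSet.
	    out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.26.2/kubectl",
	        "apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
	        "-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	    fmt.Printf("%s", out)
	    if err != nil {
	        panic(err)
	    }
	}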
	I0307 10:28:15.487964    7018 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 10:28:15.488018    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:15.488023    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.488030    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.488035    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.490928    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:15.490936    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.490945    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.490952    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.490959    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.490966    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.490971    7018 round_trippers.go:580]     Audit-Id: fbf2e35b-55b7-466f-9275-31e56ce04183
	I0307 10:28:15.490978    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.492557    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1032"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"402","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81648 chars]
	I0307 10:28:15.495381    7018 system_pods.go:59] 12 kube-system pods found
	I0307 10:28:15.495395    7018 system_pods.go:61] "coredns-787d4945fb-x8m8v" [c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6] Running
	I0307 10:28:15.495400    7018 system_pods.go:61] "etcd-multinode-260000" [aa53b0f1-968e-450d-90b2-ad26a79cea99] Running
	I0307 10:28:15.495403    7018 system_pods.go:61] "kindnet-gfgwn" [64dc8044-f77e-41b4-bb19-1a254bf29e05] Running
	I0307 10:28:15.495407    7018 system_pods.go:61] "kindnet-j5gj9" [f17b9702-c5c0-4b31-a136-e0370bc62d79] Running
	I0307 10:28:15.495411    7018 system_pods.go:61] "kindnet-z6kqp" [4884d21b-1be9-4b53-8f70-dd4fe0efa264] Running
	I0307 10:28:15.495415    7018 system_pods.go:61] "kube-apiserver-multinode-260000" [64ba25bc-eee2-433a-b0ef-a13769f04555] Running
	I0307 10:28:15.495421    7018 system_pods.go:61] "kube-controller-manager-multinode-260000" [8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0307 10:28:15.495425    7018 system_pods.go:61] "kube-proxy-8qwhq" [3e455149-bbe2-4173-a413-f4962626b233] Running
	I0307 10:28:15.495429    7018 system_pods.go:61] "kube-proxy-pxshj" [3ee33e87-083d-4833-a6d4-8b459ec6ea70] Running
	I0307 10:28:15.495433    7018 system_pods.go:61] "kube-proxy-q8cm8" [b9f69548-a872-4d80-aa73-ffba99b33229] Running
	I0307 10:28:15.495437    7018 system_pods.go:61] "kube-scheduler-multinode-260000" [0739e1eb-4026-47ee-b2fe-6a9901c77317] Running
	I0307 10:28:15.495441    7018 system_pods.go:61] "storage-provisioner" [0b88c317-8e90-4927-b4f8-cae5597b5dc8] Running
	I0307 10:28:15.495444    7018 system_pods.go:74] duration metric: took 7.473493ms to wait for pod list to return data ...
	I0307 10:28:15.495451    7018 node_conditions.go:102] verifying NodePressure condition ...
	I0307 10:28:15.495484    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
	I0307 10:28:15.495488    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.495494    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.495499    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.497193    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.497203    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.497209    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.497215    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.497225    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.497237    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.497246    7018 round_trippers.go:580]     Audit-Id: 87494186-1238-43d5-866d-3fb8cf3ac670
	I0307 10:28:15.497252    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.497439    7018 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1032"},"items":[{"metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16457 chars]
	I0307 10:28:15.497964    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:28:15.497980    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:28:15.497991    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:28:15.497994    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:28:15.497998    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:28:15.498001    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:28:15.498005    7018 node_conditions.go:105] duration metric: took 2.549988ms to run NodePressure ...
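	The NodePressure pass lists the nodes once and reads two capacity figures per node. The same check expressed with client-go, pointed at the host-side kubeconfig this run uses:

	package main

	import (
	    "context"
	    "fmt"

	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)

	func main() {
	    config, err := clientcmd.BuildConfigFromFlags("",
	        "/Users/jenkins/minikube-integration/15985-3430/kubeconfig")
	    if err != nil {
	        panic(err)
	    }
	    cs, err := kubernetes.NewForConfig(config)
	    if err != nil {
	        panic(err)
	    }
	    nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	    if err != nil {
	        panic(err)
	    }
	    // The log prints exactly these two figures for each of the three nodes.
	    for _, n := range nodes.Items {
	        fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
	            n.Status.Capacity.StorageEphemeral().String(),
	            n.Status.Capacity.Cpu().String())
	    }
	}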
	I0307 10:28:15.498014    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 10:28:15.613921    7018 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0307 10:28:15.647095    7018 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0307 10:28:15.648104    7018 command_runner.go:130] ! W0307 18:28:15.688091    2114 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 10:28:15.648194    7018 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0307 10:28:15.648246    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0307 10:28:15.648251    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.648257    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.648262    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.650635    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:15.650643    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.650648    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.650653    7018 round_trippers.go:580]     Audit-Id: cb509b59-97eb-4381-8070-69cc8abdab39
	I0307 10:28:15.650664    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.650670    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.650675    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.650683    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.651119    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1034"},"items":[{"metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"288","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28366 chars]
	I0307 10:28:15.651785    7018 kubeadm.go:784] kubelet initialised
	I0307 10:28:15.651796    7018 kubeadm.go:785] duration metric: took 3.59091ms waiting for restarted kubelet to initialise ...
	I0307 10:28:15.651802    7018 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 10:28:15.651829    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:15.651834    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.651840    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.651856    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.654797    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:15.654807    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.654812    7018 round_trippers.go:580]     Audit-Id: a9d90e98-0ed7-4ce3-b64a-cc82a3347b6f
	I0307 10:28:15.654817    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.654823    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.654828    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.654832    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.654837    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.656020    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1034"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"402","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81648 chars]
	I0307 10:28:15.657761    7018 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:15.657793    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:15.657798    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.657805    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.657811    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.659065    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.659077    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.659085    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.659092    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.659098    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.659104    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.659109    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.659115    7018 round_trippers.go:580]     Audit-Id: eb2db07a-7079-4adb-a12f-c3919e2af0f0
	I0307 10:28:15.659276    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"402","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6281 chars]
	I0307 10:28:15.659508    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:15.659514    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.659520    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.659526    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.660689    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.660696    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.660701    7018 round_trippers.go:580]     Audit-Id: 4dd3efdc-1609-4f2d-9ae0-4a842093d527
	I0307 10:28:15.660706    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.660711    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.660717    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.660724    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.660734    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.660828    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:15.660996    7018 pod_ready.go:97] node "multinode-260000" hosting pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:15.661003    7018 pod_ready.go:81] duration metric: took 3.233228ms waiting for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
	E0307 10:28:15.661009    7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
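	pod_ready's skip logic is visible here: fetch the pod, fetch the node named in its spec, and abandon the per-pod wait the moment that node's Ready condition is not "True". A client-go sketch of that gate, using the coredns pod from this run as the example:

	package main

	import (
	    "context"
	    "fmt"

	    corev1 "k8s.io/api/core/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the node's Ready condition is "True".
	func nodeIsReady(cs kubernetes.Interface, nodeName string) (bool, error) {
	    node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	    if err != nil {
	        return false, err
	    }
	    for _, cond := range node.Status.Conditions {
	        if cond.Type == corev1.NodeReady {
	            return cond.Status == corev1.ConditionTrue, nil
	        }
	    }
	    return false, fmt.Errorf("node %q has no Ready condition", nodeName)
	}

	func main() {
	    config, err := clientcmd.BuildConfigFromFlags("",
	        "/Users/jenkins/minikube-integration/15985-3430/kubeconfig")
	    if err != nil {
	        panic(err)
	    }
	    cs, err := kubernetes.NewForConfig(config)
	    if err != nil {
	        panic(err)
	    }
	    pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
	        "coredns-787d4945fb-x8m8v", metav1.GetOptions{})
	    if err != nil {
	        panic(err)
	    }
	    if ready, err := nodeIsReady(cs, pod.Spec.NodeName); err != nil {
	        panic(err)
	    } else if !ready {
	        fmt.Printf("node %q hosting pod %q is not Ready, skipping the wait\n",
	            pod.Spec.NodeName, pod.Name)
	    }
	}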
	I0307 10:28:15.661014    7018 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:15.661036    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-260000
	I0307 10:28:15.661040    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.661046    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.661051    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.662218    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.662226    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.662232    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.662238    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.662244    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.662249    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.662254    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.662258    7018 round_trippers.go:580]     Audit-Id: eeb6ea95-4efc-44d3-86d7-f3e9abc4f441
	I0307 10:28:15.662373    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"288","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5846 chars]
	I0307 10:28:15.662566    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:15.662572    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.662578    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.662586    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.663695    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.663702    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.663708    7018 round_trippers.go:580]     Audit-Id: 0c08723d-f6d6-4c3f-bc19-ce14073bddc8
	I0307 10:28:15.663713    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.663718    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.663724    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.663728    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.663733    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.663841    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:15.664005    7018 pod_ready.go:97] node "multinode-260000" hosting pod "etcd-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:15.664012    7018 pod_ready.go:81] duration metric: took 2.993408ms waiting for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
	E0307 10:28:15.664024    7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "etcd-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:15.664031    7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:15.664054    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-260000
	I0307 10:28:15.664059    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.664064    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.664070    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.665133    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.665140    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.665145    7018 round_trippers.go:580]     Audit-Id: d8155bb7-ed68-40c6-a807-4b433cb29ded
	I0307 10:28:15.665164    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.665181    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.665188    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.665193    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.665199    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.665314    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-260000","namespace":"kube-system","uid":"64ba25bc-eee2-433a-b0ef-a13769f04555","resourceVersion":"269","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"76402f877907c95a3936143f580968be","kubernetes.io/config.mirror":"76402f877907c95a3936143f580968be","kubernetes.io/config.seen":"2023-03-07T18:18:28.739580253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7383 chars]
	I0307 10:28:15.665528    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:15.665534    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.665540    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.665546    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.666728    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.666735    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.666743    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.666752    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.666761    7018 round_trippers.go:580]     Audit-Id: 90f98c95-77ef-4f41-8b0d-68655aa67aef
	I0307 10:28:15.666768    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.666773    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.666778    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.666842    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:15.667008    7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-apiserver-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:15.667016    7018 pod_ready.go:81] duration metric: took 2.97888ms waiting for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
	E0307 10:28:15.667021    7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-apiserver-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:15.667025    7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:15.688093    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-260000
	I0307 10:28:15.688109    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.688116    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.688121    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.689605    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:15.689619    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.689626    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.689631    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.689636    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.689642    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:15 GMT
	I0307 10:28:15.689649    7018 round_trippers.go:580]     Audit-Id: 30247593-c3f9-4f0b-8ec3-84987c2d98e7
	I0307 10:28:15.689656    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.689775    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-260000","namespace":"kube-system","uid":"8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c","resourceVersion":"1031","creationTimestamp":"2023-03-07T18:18:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.mirror":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.seen":"2023-03-07T18:18:16.838236256Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7421 chars]
	I0307 10:28:15.888328    7018 request.go:622] Waited for 198.258292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:15.888357    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:15.888362    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:15.888370    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:15.888378    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:15.890719    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:15.890732    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:15.890738    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:15.890742    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:15.890748    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:15.890753    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:16 GMT
	I0307 10:28:15.890757    7018 round_trippers.go:580]     Audit-Id: 2c7858e8-abf5-4b14-91d6-55537d022b63
	I0307 10:28:15.890762    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:15.890832    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:15.891019    7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-controller-manager-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:15.891027    7018 pod_ready.go:81] duration metric: took 223.996649ms waiting for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
	E0307 10:28:15.891033    7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-controller-manager-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
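	The repeated "Waited for ... due to client-side throttling" lines are client-go's built-in token-bucket limiter (QPS 5 / burst 10 when rest.Config leaves them unset) pacing the back-to-back pod and node GETs of this wait loop. A sketch of raising those limits before building the clientset; the values 50/100 are arbitrary illustrations:

	package main

	import (
	    "context"
	    "fmt"

	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)

	func main() {
	    config, err := clientcmd.BuildConfigFromFlags("",
	        "/Users/jenkins/minikube-integration/15985-3430/kubeconfig")
	    if err != nil {
	        panic(err)
	    }
	    // Defaults are QPS 5 and Burst 10; short bursts past ten requests start
	    // queueing, which is what request.go:622 reports above.
	    config.QPS = 50
	    config.Burst = 100
	    cs, err := kubernetes.NewForConfig(config)
	    if err != nil {
	        panic(err)
	    }
	    pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	    if err != nil {
	        panic(err)
	    }
	    fmt.Println("kube-system pods:", len(pods.Items))
	}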
	I0307 10:28:15.891041    7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:16.088078    7018 request.go:622] Waited for 197.006181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
	I0307 10:28:16.088110    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
	I0307 10:28:16.088145    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:16.088152    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:16.088171    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:16.090139    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:16.090148    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:16.090153    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:16 GMT
	I0307 10:28:16.090158    7018 round_trippers.go:580]     Audit-Id: 33bdce0d-afd5-41b3-be54-1778f67df277
	I0307 10:28:16.090163    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:16.090168    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:16.090174    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:16.090180    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:16.090265    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8qwhq","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e455149-bbe2-4173-a413-f4962626b233","resourceVersion":"359","creationTimestamp":"2023-03-07T18:18:41Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0307 10:28:16.289549    7018 request.go:622] Waited for 199.030503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:16.289608    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:16.289613    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:16.289619    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:16.289625    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:16.291464    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:16.291474    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:16.291480    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:16.291486    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:16.291491    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:16.291497    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:16 GMT
	I0307 10:28:16.291502    7018 round_trippers.go:580]     Audit-Id: 304d1604-8237-4817-97b8-2398828df2aa
	I0307 10:28:16.291512    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:16.291606    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:16.291814    7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-proxy-8qwhq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:16.291823    7018 pod_ready.go:81] duration metric: took 400.77463ms waiting for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
	E0307 10:28:16.291829    7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-proxy-8qwhq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:16.291845    7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:16.488974    7018 request.go:622] Waited for 197.089772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
	I0307 10:28:16.489010    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
	I0307 10:28:16.489014    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:16.489021    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:16.489028    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:16.490668    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:16.490678    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:16.490684    7018 round_trippers.go:580]     Audit-Id: f7cf2cf1-fe75-45fb-b387-3c47e4ca38bf
	I0307 10:28:16.490689    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:16.490695    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:16.490699    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:16.490705    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:16.490710    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:16 GMT
	I0307 10:28:16.490783    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pxshj","generateName":"kube-proxy-","namespace":"kube-system","uid":"3ee33e87-083d-4833-a6d4-8b459ec6ea70","resourceVersion":"469","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0307 10:28:16.688164    7018 request.go:622] Waited for 197.086665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:16.688201    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:16.688207    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:16.688216    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:16.688224    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:16.690320    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:16.690331    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:16.690337    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:16.690347    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:16.690354    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:16.690360    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:16 GMT
	I0307 10:28:16.690365    7018 round_trippers.go:580]     Audit-Id: fafa8c79-056c-4482-a7d3-9af678647000
	I0307 10:28:16.690370    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:16.690435    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"75f8e0c4-47f4-43dc-ac5e-5f77d8d4ab3b","resourceVersion":"812","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4512 chars]
	I0307 10:28:16.690610    7018 pod_ready.go:92] pod "kube-proxy-pxshj" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:16.690616    7018 pod_ready.go:81] duration metric: took 398.761593ms waiting for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:16.690622    7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:16.888997    7018 request.go:622] Waited for 198.34143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
	I0307 10:28:16.889083    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
	I0307 10:28:16.889091    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:16.889099    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:16.889107    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:16.890960    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:16.890976    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:16.890988    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:16.890997    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:16.891006    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:17 GMT
	I0307 10:28:16.891013    7018 round_trippers.go:580]     Audit-Id: 2a6b83fb-355a-47d1-a5fb-041011c34ce5
	I0307 10:28:16.891021    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:16.891029    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:16.891126    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8cm8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9f69548-a872-4d80-aa73-ffba99b33229","resourceVersion":"1005","creationTimestamp":"2023-03-07T18:26:06Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0307 10:28:17.089042    7018 request.go:622] Waited for 197.667165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
	I0307 10:28:17.089099    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
	I0307 10:28:17.089104    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:17.089110    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:17.089123    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:17.092228    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:17.092240    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:17.092249    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:17 GMT
	I0307 10:28:17.092256    7018 round_trippers.go:580]     Audit-Id: 4d8ae72e-fdde-4d59-9a71-91d0c3ee68a0
	I0307 10:28:17.092264    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:17.092271    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:17.092276    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:17.092282    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:17.092354    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m03","uid":"c193c270-6b50-44d5-962f-c88bf307bb54","resourceVersion":"1019","creationTimestamp":"2023-03-07T18:26:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4450 chars]
	I0307 10:28:17.092536    7018 pod_ready.go:92] pod "kube-proxy-q8cm8" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:17.092542    7018 pod_ready.go:81] duration metric: took 401.914192ms waiting for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:17.092550    7018 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:17.289090    7018 request.go:622] Waited for 196.506508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
	I0307 10:28:17.289121    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
	I0307 10:28:17.289126    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:17.289133    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:17.289140    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:17.290898    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:17.290909    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:17.290915    7018 round_trippers.go:580]     Audit-Id: 9fb63a2b-6315-4a56-8919-8e3ff05df64c
	I0307 10:28:17.290920    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:17.290926    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:17.290932    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:17.290936    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:17.290941    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:17 GMT
	I0307 10:28:17.291122    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-260000","namespace":"kube-system","uid":"0739e1eb-4026-47ee-b2fe-6a9901c77317","resourceVersion":"1035","creationTimestamp":"2023-03-07T18:18:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.mirror":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.seen":"2023-03-07T18:18:28.739583516Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5133 chars]
	I0307 10:28:17.488710    7018 request.go:622] Waited for 197.357013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:17.488741    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:17.488773    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:17.488780    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:17.488786    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:17.492401    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:17.492411    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:17.492417    7018 round_trippers.go:580]     Audit-Id: 8a48812e-9efb-405d-92a7-d9eab408cfe7
	I0307 10:28:17.492429    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:17.492435    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:17.492439    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:17.492445    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:17.492449    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:17 GMT
	I0307 10:28:17.492517    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:17.492711    7018 pod_ready.go:97] node "multinode-260000" hosting pod "kube-scheduler-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:17.492718    7018 pod_ready.go:81] duration metric: took 400.162814ms waiting for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
	E0307 10:28:17.492724    7018 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-260000" hosting pod "kube-scheduler-multinode-260000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-260000" has status "Ready":"False"
	I0307 10:28:17.492729    7018 pod_ready.go:38] duration metric: took 1.8409126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
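	The extra wait above polls each system-critical pod and counts it Ready only when its PodReady condition is True; the "skipping!" errors show the additional rule that a pod hosted on a NotReady node is not waited on. A minimal sketch of the condition check with client-go types; podIsReady is an illustrative name, not minikube's helper:

package main

import corev1 "k8s.io/api/core/v1"

// podIsReady reports whether the pod's PodReady condition is True, the same
// signal the pod_ready.go waits above key off.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}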
	I0307 10:28:17.492740    7018 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 10:28:17.500400    7018 command_runner.go:130] > -16
	I0307 10:28:17.500574    7018 ops.go:34] apiserver oom_adj: -16
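	The -16 read back from /proc/$(pgrep kube-apiserver)/oom_adj confirms the apiserver is shielded from the kernel's OOM killer. A minimal local equivalent of that probe, assuming Linux procfs; oomAdj is an illustrative name (minikube runs the shell pipeline above over SSH instead):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// oomAdj returns the legacy OOM-adjust score of a process; negative values
// such as -16 make the kernel's OOM killer avoid it.
func oomAdj(pid int) (int, error) {
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(b)))
}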
	I0307 10:28:17.500584    7018 kubeadm.go:637] restartCluster took 20.241085671s
	I0307 10:28:17.500589    7018 kubeadm.go:403] StartCluster complete in 20.26361982s
	I0307 10:28:17.500600    7018 settings.go:142] acquiring lock: {Name:mk4d055ee1d778ec2752c0ce26b6fb536462adb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:28:17.500678    7018 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:28:17.501023    7018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15985-3430/kubeconfig: {Name:mkea569ea3041d84fd3aeaa788f308c9891aa7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:28:17.501262    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 10:28:17.501294    7018 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0307 10:28:17.546290    7018 out.go:177] * Enabled addons: 
	I0307 10:28:17.501457    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:28:17.501669    7018 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:28:17.583590    7018 addons.go:499] enable addons completed in 82.276784ms: enabled=[]
	I0307 10:28:17.583795    7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:28:17.584004    7018 round_trippers.go:463] GET https://192.168.64.12:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0307 10:28:17.584011    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:17.584017    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:17.584022    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:17.585901    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:17.585911    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:17.585917    7018 round_trippers.go:580]     Audit-Id: 381c106f-61b9-4164-8d45-b690984d5352
	I0307 10:28:17.585927    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:17.585933    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:17.585937    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:17.585942    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:17.585947    7018 round_trippers.go:580]     Content-Length: 292
	I0307 10:28:17.585952    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:17 GMT
	I0307 10:28:17.585965    7018 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9058bb7-5525-4245-a92a-3b0f0144c5d4","resourceVersion":"1033","creationTimestamp":"2023-03-07T18:18:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0307 10:28:17.586053    7018 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-260000" context rescaled to 1 replicas
	I0307 10:28:17.586069    7018 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
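	The rescale logged by kapi.go:248 goes through the Deployment's scale subresource, which is what the earlier GET .../deployments/coredns/scale fetched. A minimal sketch of that read-modify-write with client-go's typed scale methods; rescaleCoreDNS is an illustrative name, not minikube's code:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS pins the coredns Deployment to the given replica count via
// the scale subresource, as the log above does for 1 replica.
func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
	d := cs.AppsV1().Deployments("kube-system")
	sc, err := d.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Spec.Replicas == replicas {
		return nil // already at the desired scale
	}
	sc.Spec.Replicas = replicas
	_, err = d.UpdateScale(ctx, "coredns", sc, metav1.UpdateOptions{})
	return err
}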
	I0307 10:28:17.598551    7018 command_runner.go:130] > apiVersion: v1
	I0307 10:28:17.607409    7018 command_runner.go:130] > data:
	I0307 10:28:17.607416    7018 command_runner.go:130] >   Corefile: |
	I0307 10:28:17.607423    7018 command_runner.go:130] >     .:53 {
	I0307 10:28:17.607394    7018 out.go:177] * Verifying Kubernetes components...
	I0307 10:28:17.607432    7018 command_runner.go:130] >         log
	I0307 10:28:17.665368    7018 command_runner.go:130] >         errors
	I0307 10:28:17.665380    7018 command_runner.go:130] >         health {
	I0307 10:28:17.665387    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:28:17.665390    7018 command_runner.go:130] >            lameduck 5s
	I0307 10:28:17.665471    7018 command_runner.go:130] >         }
	I0307 10:28:17.665485    7018 command_runner.go:130] >         ready
	I0307 10:28:17.665501    7018 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0307 10:28:17.665515    7018 command_runner.go:130] >            pods insecure
	I0307 10:28:17.665530    7018 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0307 10:28:17.665540    7018 command_runner.go:130] >            ttl 30
	I0307 10:28:17.665547    7018 command_runner.go:130] >         }
	I0307 10:28:17.665555    7018 command_runner.go:130] >         prometheus :9153
	I0307 10:28:17.665561    7018 command_runner.go:130] >         hosts {
	I0307 10:28:17.665581    7018 command_runner.go:130] >            192.168.64.1 host.minikube.internal
	I0307 10:28:17.665589    7018 command_runner.go:130] >            fallthrough
	I0307 10:28:17.665596    7018 command_runner.go:130] >         }
	I0307 10:28:17.665604    7018 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0307 10:28:17.665613    7018 command_runner.go:130] >            max_concurrent 1000
	I0307 10:28:17.665622    7018 command_runner.go:130] >         }
	I0307 10:28:17.665633    7018 command_runner.go:130] >         cache 30
	I0307 10:28:17.665648    7018 command_runner.go:130] >         loop
	I0307 10:28:17.665659    7018 command_runner.go:130] >         reload
	I0307 10:28:17.665673    7018 command_runner.go:130] >         loadbalance
	I0307 10:28:17.665700    7018 command_runner.go:130] >     }
	I0307 10:28:17.665714    7018 command_runner.go:130] > kind: ConfigMap
	I0307 10:28:17.665724    7018 command_runner.go:130] > metadata:
	I0307 10:28:17.665738    7018 command_runner.go:130] >   creationTimestamp: "2023-03-07T18:18:28Z"
	I0307 10:28:17.665750    7018 command_runner.go:130] >   name: coredns
	I0307 10:28:17.665761    7018 command_runner.go:130] >   namespace: kube-system
	I0307 10:28:17.665769    7018 command_runner.go:130] >   resourceVersion: "361"
	I0307 10:28:17.665778    7018 command_runner.go:130] >   uid: ab4f9271-2ad1-469a-9991-ac0e7cd4eee1
	I0307 10:28:17.665875    7018 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
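	start.go:894 can skip the rewrite because the hosts block in the Corefile dumped above already maps 192.168.64.1 to host.minikube.internal. A minimal sketch of that detection, assuming a substring check on the coredns ConfigMap suffices; hasHostRecord is an illustrative name:

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasHostRecord reports whether the CoreDNS Corefile already carries the
// host.minikube.internal record, in which case no update is needed.
func hasHostRecord(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
}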
	I0307 10:28:17.677281    7018 node_ready.go:35] waiting up to 6m0s for node "multinode-260000" to be "Ready" ...
	I0307 10:28:17.688141    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:17.688153    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:17.688160    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:17.688165    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:17.699560    7018 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0307 10:28:17.699573    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:17.699579    7018 round_trippers.go:580]     Audit-Id: b0a8d418-5306-402d-aafe-b01480d098d1
	I0307 10:28:17.699584    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:17.699588    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:17.699594    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:17.699602    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:17.699607    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:17 GMT
	I0307 10:28:17.699666    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:18.201280    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:18.201301    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:18.201313    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:18.201324    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:18.205520    7018 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 10:28:18.205536    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:18.205545    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:18 GMT
	I0307 10:28:18.205551    7018 round_trippers.go:580]     Audit-Id: 93568139-27e9-412b-aabc-a063cf381701
	I0307 10:28:18.205556    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:18.205560    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:18.205566    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:18.205571    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:18.205679    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:18.700510    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:18.700532    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:18.700545    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:18.700556    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:18.703654    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:18.703670    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:18.703678    7018 round_trippers.go:580]     Audit-Id: fe05d8ff-851d-43ec-87d1-ea8137b7dbe8
	I0307 10:28:18.703684    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:18.703691    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:18.703714    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:18.703725    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:18.703732    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:18 GMT
	I0307 10:28:18.703813    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:19.202177    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:19.202200    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:19.202214    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:19.202227    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:19.205274    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:19.205290    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:19.205298    7018 round_trippers.go:580]     Audit-Id: 01e6aee3-dfa5-4ab3-b092-2707828ba795
	I0307 10:28:19.205331    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:19.205342    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:19.205349    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:19.205357    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:19.205364    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:19 GMT
	I0307 10:28:19.205470    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:19.700708    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:19.700729    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:19.700741    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:19.700751    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:19.703406    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:19.703422    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:19.703431    7018 round_trippers.go:580]     Audit-Id: 3a975007-4ad9-4952-af4f-5375799e6a1a
	I0307 10:28:19.703439    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:19.703445    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:19.703452    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:19.703458    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:19.703466    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:19 GMT
	I0307 10:28:19.703543    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:19.703788    7018 node_ready.go:58] node "multinode-260000" has status "Ready":"False"
	I0307 10:28:20.200489    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:20.200509    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:20.200521    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:20.200531    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:20.203162    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:20.203178    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:20.203186    7018 round_trippers.go:580]     Audit-Id: a8a0b987-0c00-4eb2-84cc-bb8ba63cb67a
	I0307 10:28:20.203193    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:20.203202    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:20.203212    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:20.203220    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:20.203228    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:20 GMT
	I0307 10:28:20.203489    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:20.700672    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:20.700696    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:20.700709    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:20.700725    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:20.703549    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:20.703565    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:20.703573    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:20.703580    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:20.703586    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:20.703593    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:20 GMT
	I0307 10:28:20.703599    7018 round_trippers.go:580]     Audit-Id: efe8aac9-6cb0-4496-83f5-15dd81197a83
	I0307 10:28:20.703607    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:20.703677    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:21.201352    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:21.201373    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:21.201385    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:21.201395    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:21.204173    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:21.204190    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:21.204197    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:21 GMT
	I0307 10:28:21.204205    7018 round_trippers.go:580]     Audit-Id: be92e2ce-4712-4f1e-861a-703e11d6cba4
	I0307 10:28:21.204220    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:21.204229    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:21.204235    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:21.204243    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:21.204341    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:21.700804    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:21.700827    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:21.700840    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:21.700851    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:21.703563    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:21.703580    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:21.703588    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:21.703595    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:21.703602    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:21 GMT
	I0307 10:28:21.703609    7018 round_trippers.go:580]     Audit-Id: d76a302b-b114-4fb6-a945-db5c79d73c04
	I0307 10:28:21.703616    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:21.703622    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:21.703693    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:21.703979    7018 node_ready.go:58] node "multinode-260000" has status "Ready":"False"
	I0307 10:28:22.200196    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:22.200216    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:22.200229    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:22.200239    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:22.202586    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:22.202599    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:22.202606    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:22.202614    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:22 GMT
	I0307 10:28:22.202622    7018 round_trippers.go:580]     Audit-Id: 4ff0cc55-c046-416f-9185-daae0bebce4a
	I0307 10:28:22.202632    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:22.202639    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:22.202696    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:22.202811    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:22.700709    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:22.700730    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:22.700742    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:22.700752    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:22.702936    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:22.723882    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:22.723896    7018 round_trippers.go:580]     Audit-Id: 29769d58-0043-4d39-82f0-cccd4df4015a
	I0307 10:28:22.723957    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:22.723969    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:22.723978    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:22.723988    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:22.723998    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:22 GMT
	I0307 10:28:22.724094    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:23.200620    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:23.200644    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:23.200657    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:23.200667    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:23.203465    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:23.203481    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:23.203489    7018 round_trippers.go:580]     Audit-Id: 9e76918b-04a7-460f-b7a3-1bb26e8c0971
	I0307 10:28:23.203496    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:23.203502    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:23.203510    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:23.203517    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:23.203523    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:23 GMT
	I0307 10:28:23.203617    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1028","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5457 chars]
	I0307 10:28:23.700169    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:23.700191    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:23.700203    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:23.700213    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:23.703029    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:23.703045    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:23.703053    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:23.703059    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:23.703067    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:23.703076    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:23.703088    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:23 GMT
	I0307 10:28:23.703098    7018 round_trippers.go:580]     Audit-Id: ef8f12d5-7107-46fa-a902-ce29a6cd21c5
	I0307 10:28:23.703227    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:23.703480    7018 node_ready.go:49] node "multinode-260000" has status "Ready":"True"
	I0307 10:28:23.703494    7018 node_ready.go:38] duration metric: took 6.026171359s waiting for node "multinode-260000" to be "Ready" ...
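	The 6.03 s wait above is a plain poll: GET the node roughly every 500 ms until its Ready condition flips to True (visible here as resourceVersion 1028 giving way to 1092). A minimal sketch of such a loop with client-go and apimachinery's wait helper; waitNodeReady is an illustrative name, not minikube's node_ready.go:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its NodeReady condition is True or the
// timeout elapses.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}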
	I0307 10:28:23.703502    7018 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 10:28:23.703549    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:23.703555    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:23.703563    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:23.703572    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:23.705759    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:23.705769    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:23.705780    7018 round_trippers.go:580]     Audit-Id: 67287338-b563-4ece-963d-6a23473c12f5
	I0307 10:28:23.705788    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:23.705795    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:23.705804    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:23.705811    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:23.705818    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:23 GMT
	I0307 10:28:23.706556    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1094"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83638 chars]
	I0307 10:28:23.708320    7018 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:23.708353    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:23.708358    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:23.708374    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:23.708381    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:23.709654    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:23.709668    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:23.709674    7018 round_trippers.go:580]     Audit-Id: 31e97546-40fd-4948-9b6f-419bdad39a05
	I0307 10:28:23.709680    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:23.709685    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:23.709690    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:23.709696    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:23.709701    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:23 GMT
	I0307 10:28:23.709974    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:23.710200    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:23.710205    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:23.710212    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:23.710218    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:23.711266    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:23.711276    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:23.711284    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:23.711291    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:23.711299    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:23.711307    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:23.711316    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:23 GMT
	I0307 10:28:23.711324    7018 round_trippers.go:580]     Audit-Id: ef253b5e-8ae9-4c22-97b4-635ece1c07f1
	I0307 10:28:23.711443    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:24.211832    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:24.211854    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:24.211868    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:24.211879    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:24.214134    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:24.214147    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:24.214155    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:24.214161    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:24.214169    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:24 GMT
	I0307 10:28:24.214176    7018 round_trippers.go:580]     Audit-Id: 7cceac8c-72f2-43b3-a70c-da8298a351ea
	I0307 10:28:24.214183    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:24.214189    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:24.214267    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:24.214622    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:24.214631    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:24.214639    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:24.214647    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:24.216139    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:24.216148    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:24.216154    7018 round_trippers.go:580]     Audit-Id: 651af490-ed9e-4eba-a495-32b2210d00c4
	I0307 10:28:24.216159    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:24.216167    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:24.216176    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:24.216187    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:24.216193    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:24 GMT
	I0307 10:28:24.216294    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:24.712583    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:24.712604    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:24.712617    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:24.712627    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:24.715128    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:24.715141    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:24.715151    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:24.715174    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:24.715202    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:24.715215    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:24 GMT
	I0307 10:28:24.715229    7018 round_trippers.go:580]     Audit-Id: 64f8c7b5-e206-4888-b04e-57f95c098459
	I0307 10:28:24.715263    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:24.715362    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:24.715724    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:24.715733    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:24.715741    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:24.715748    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:24.717117    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:24.717131    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:24.717139    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:24.717149    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:24.717158    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:24 GMT
	I0307 10:28:24.717165    7018 round_trippers.go:580]     Audit-Id: 39facfb8-6882-4093-a54a-be9e41cdcd8a
	I0307 10:28:24.717189    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:24.717203    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:24.717297    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:25.211941    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:25.211961    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:25.211973    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:25.211984    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:25.214996    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:25.215012    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:25.215056    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:25.215076    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:25.215089    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:25.215121    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:25 GMT
	I0307 10:28:25.215133    7018 round_trippers.go:580]     Audit-Id: eab464a3-fd8c-4abd-92da-a9e3fab09b87
	I0307 10:28:25.215153    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:25.215232    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:25.215588    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:25.215596    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:25.215604    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:25.215611    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:25.216989    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:25.217000    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:25.217005    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:25.217010    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:25 GMT
	I0307 10:28:25.217021    7018 round_trippers.go:580]     Audit-Id: 1b48fc62-d0ae-42f1-a567-d263b0778b46
	I0307 10:28:25.217026    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:25.217031    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:25.217038    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:25.217228    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:25.713156    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:25.713175    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:25.713187    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:25.713197    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:25.715881    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:25.715901    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:25.715913    7018 round_trippers.go:580]     Audit-Id: b458a53f-cebf-4dba-b1b0-795a83b24bef
	I0307 10:28:25.715924    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:25.715933    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:25.715939    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:25.715946    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:25.715956    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:25 GMT
	I0307 10:28:25.716134    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:25.716499    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:25.716508    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:25.716516    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:25.716523    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:25.717669    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:25.717677    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:25.717683    7018 round_trippers.go:580]     Audit-Id: 1eb8ab80-758c-4e81-8dcb-159f98be89b6
	I0307 10:28:25.717691    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:25.717698    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:25.717705    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:25.717711    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:25.717717    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:25 GMT
	I0307 10:28:25.717847    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:25.718043    7018 pod_ready.go:102] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"False"
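
Each iteration also re-fetches the node object alongside the pod; the readiness wait appears to confirm that the host node still reports Ready while the pod settles. A matching node-side sketch under the same assumptions (illustrative only, not minikube's actual helper):

	// isNodeReady reports whether the node carries a Ready condition
	// with status True, mirroring the pod-side check above.
	func isNodeReady(node *corev1.Node) bool {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}
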
	I0307 10:28:26.211810    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:26.211826    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:26.211833    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:26.211854    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:26.217580    7018 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 10:28:26.217593    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:26.217599    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:26.217624    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:26 GMT
	I0307 10:28:26.217634    7018 round_trippers.go:580]     Audit-Id: 25844fb6-cd84-4dd3-af18-9f89ee6d5a04
	I0307 10:28:26.217641    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:26.217646    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:26.217651    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:26.218222    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:26.218502    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:26.218509    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:26.218515    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:26.218520    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:26.223546    7018 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 10:28:26.223558    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:26.223563    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:26 GMT
	I0307 10:28:26.223568    7018 round_trippers.go:580]     Audit-Id: bf250b8a-6074-45b3-9f33-45ad42a6a343
	I0307 10:28:26.223573    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:26.223578    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:26.223582    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:26.223587    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:26.224042    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:26.713218    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:26.713243    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:26.713255    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:26.713265    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:26.716102    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:26.716121    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:26.716129    7018 round_trippers.go:580]     Audit-Id: 219d5f63-3a7c-44c7-8b51-2921f95c2710
	I0307 10:28:26.716136    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:26.716144    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:26.716151    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:26.716157    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:26.716165    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:26 GMT
	I0307 10:28:26.716247    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:26.716596    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:26.716604    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:26.716612    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:26.716619    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:26.718244    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:26.718252    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:26.718258    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:26.718264    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:26.718274    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:26.718280    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:26.718288    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:26 GMT
	I0307 10:28:26.718293    7018 round_trippers.go:580]     Audit-Id: ad769d45-1dbe-4f0f-bad4-953da8623939
	I0307 10:28:26.718441    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:27.212704    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:27.212727    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:27.212739    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:27.212749    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:27.215311    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:27.215337    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:27.215345    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:27.215353    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:27.215361    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:27 GMT
	I0307 10:28:27.215367    7018 round_trippers.go:580]     Audit-Id: 36856e4f-a7e1-45d6-97ce-8f885ac8c841
	I0307 10:28:27.215374    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:27.215381    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:27.215565    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:27.215939    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:27.215948    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:27.215956    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:27.215964    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:27.217347    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:27.217354    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:27.217362    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:27.217368    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:27 GMT
	I0307 10:28:27.217374    7018 round_trippers.go:580]     Audit-Id: d6676113-bd9a-4eaf-ba1b-019818744e42
	I0307 10:28:27.217381    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:27.217389    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:27.217404    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:27.217556    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:27.711824    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:27.724865    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:27.724880    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:27.724887    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:27.726579    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:27.726589    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:27.726594    7018 round_trippers.go:580]     Audit-Id: 0d01fa41-8246-4722-9399-93a5592f6b29
	I0307 10:28:27.726599    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:27.726606    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:27.726613    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:27.726619    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:27.726624    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:27 GMT
	I0307 10:28:27.726876    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:27.727175    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:27.727181    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:27.727187    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:27.727192    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:27.728314    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:27.728322    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:27.728334    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:27.728347    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:27.728353    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:27.728370    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:27 GMT
	I0307 10:28:27.728379    7018 round_trippers.go:580]     Audit-Id: 0e3e9ef9-ecac-45df-aee2-aff56bc03a97
	I0307 10:28:27.728391    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:27.728478    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:27.728664    7018 pod_ready.go:102] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"False"
	I0307 10:28:28.212950    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:28.212969    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:28.212982    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:28.212992    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:28.216019    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:28.216035    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:28.216043    7018 round_trippers.go:580]     Audit-Id: 24e3382f-877e-4bd3-9d01-53648e905133
	I0307 10:28:28.216051    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:28.216057    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:28.216064    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:28.216072    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:28.216078    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:28 GMT
	I0307 10:28:28.216218    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:28.216592    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:28.216601    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:28.216610    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:28.216617    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:28.218098    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:28.218109    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:28.218116    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:28 GMT
	I0307 10:28:28.218121    7018 round_trippers.go:580]     Audit-Id: ba13bf42-a23e-4b8b-b82d-f134c64fb02d
	I0307 10:28:28.218133    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:28.218139    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:28.218144    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:28.218149    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:28.218380    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:28.713844    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:28.713872    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:28.713886    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:28.713897    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:28.717059    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:28.717075    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:28.717082    7018 round_trippers.go:580]     Audit-Id: 2d17ebc7-34f0-4220-a01c-eba9dc18629b
	I0307 10:28:28.717089    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:28.717096    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:28.717102    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:28.717109    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:28.717115    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:28 GMT
	I0307 10:28:28.717206    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:28.717584    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:28.717593    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:28.717601    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:28.717609    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:28.718961    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:28.718971    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:28.718978    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:28.718982    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:28.718987    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:28.718992    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:28 GMT
	I0307 10:28:28.718997    7018 round_trippers.go:580]     Audit-Id: 1a95c19b-155c-4919-8f52-e4a21e53e43d
	I0307 10:28:28.719002    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:28.719162    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:29.212285    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:29.212298    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:29.212305    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:29.212310    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:29.214049    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:29.214059    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:29.214065    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:29.214070    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:29.214075    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:29.214080    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:29 GMT
	I0307 10:28:29.214087    7018 round_trippers.go:580]     Audit-Id: 5902e368-f17f-4c82-9c7c-675d086888dd
	I0307 10:28:29.214092    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:29.214228    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:29.214511    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:29.214517    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:29.214523    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:29.214529    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:29.215699    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:29.215709    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:29.215716    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:29.215723    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:29 GMT
	I0307 10:28:29.215729    7018 round_trippers.go:580]     Audit-Id: b6d6f5f7-09c3-4195-a4c1-845aef7ffc32
	I0307 10:28:29.215734    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:29.215740    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:29.215747    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:29.215925    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:29.713052    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:29.713064    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:29.713070    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:29.713076    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:29.714443    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:29.714452    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:29.714457    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:29.714463    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:29.714468    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:29.714479    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:29 GMT
	I0307 10:28:29.714484    7018 round_trippers.go:580]     Audit-Id: 9c79de10-38b6-4cc5-8a5c-f518875339a0
	I0307 10:28:29.714489    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:29.714549    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:29.714827    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:29.714833    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:29.714839    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:29.714844    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:29.723979    7018 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0307 10:28:29.723993    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:29.724011    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:29.724019    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:29.724028    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:29.724034    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:29 GMT
	I0307 10:28:29.724040    7018 round_trippers.go:580]     Audit-Id: 23a3f013-edd3-4bde-b9dc-3cdee57361b7
	I0307 10:28:29.724046    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:29.724143    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.211801    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:30.211812    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.211819    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.211824    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.213958    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:30.213967    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.213972    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.213979    7018 round_trippers.go:580]     Audit-Id: e3914bca-23b4-48cb-b3f3-c3e31ebe9b8e
	I0307 10:28:30.213984    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.213989    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.213994    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.213999    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.219685    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1049","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6541 chars]
	I0307 10:28:30.219986    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:30.219995    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.220004    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.220012    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.221717    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:30.221732    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.221741    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.221756    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.221762    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.221769    7018 round_trippers.go:580]     Audit-Id: f3b83e3d-bec0-444f-bd00-ec3be70f6d10
	I0307 10:28:30.221777    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.221783    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.221864    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.222060    7018 pod_ready.go:102] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"False"
	I0307 10:28:30.712597    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:30.712622    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.712717    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.712731    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.716221    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:30.716239    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.716247    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.716256    7018 round_trippers.go:580]     Audit-Id: c7b16bdb-1c9a-42a3-b989-2ef728451887
	I0307 10:28:30.716263    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.716270    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.716278    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.716284    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.716375    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6489 chars]
	I0307 10:28:30.716777    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:30.716785    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.716793    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.716801    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.718436    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:30.718450    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.718457    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.718466    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.718473    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.718480    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.718485    7018 round_trippers.go:580]     Audit-Id: 405256c2-a3b7-4450-9419-3e5f6172aabd
	I0307 10:28:30.718491    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.718618    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.718803    7018 pod_ready.go:92] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:30.718812    7018 pod_ready.go:81] duration metric: took 7.010451765s waiting for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
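// --- annotation (not part of the test log) --------------------------------
// The loop above is minikube's pod_ready helper: it re-GETs the pod (and its
// node) roughly every 500ms until the pod reports the Ready condition, here
// after ~7s for coredns. Below is a minimal, hypothetical client-go sketch of
// the same poll pattern; names, timeout, and the kubeconfig source are
// illustrative, not minikube's actual implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Bound the wait like the log's "waiting up to 6m0s".
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-787d4945fb-x8m8v", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic(ctx.Err())
		case <-time.After(500 * time.Millisecond): // ~ the cadence visible above
		}
	}
}
// --- end annotation --------------------------------------------------------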
	I0307 10:28:30.718825    7018 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.718853    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-260000
	I0307 10:28:30.719043    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.719125    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.719139    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.721072    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:30.721084    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.721090    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.721095    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.721100    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.721105    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.721110    7018 round_trippers.go:580]     Audit-Id: ea8580ee-1e6e-4f3b-8474-356c1d7d09d5
	I0307 10:28:30.721114    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.721227    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"1080","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6056 chars]
	I0307 10:28:30.721443    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:30.721450    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.721456    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.721461    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.722677    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:30.722687    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.722699    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.722710    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.722719    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.722725    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.722731    7018 round_trippers.go:580]     Audit-Id: 9a6b5445-3298-4c53-9f39-0cfd9f3d0951
	I0307 10:28:30.722738    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.722826    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.723009    7018 pod_ready.go:92] pod "etcd-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:30.723015    7018 pod_ready.go:81] duration metric: took 4.185851ms waiting for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.723025    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.723049    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-260000
	I0307 10:28:30.723053    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.723059    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.723068    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.725808    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:30.725819    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.725824    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.725830    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.725835    7018 round_trippers.go:580]     Audit-Id: 27751b68-dbeb-4139-b048-aa37ba96ce0d
	I0307 10:28:30.725840    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.725844    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.725850    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.725930    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-260000","namespace":"kube-system","uid":"64ba25bc-eee2-433a-b0ef-a13769f04555","resourceVersion":"1143","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"76402f877907c95a3936143f580968be","kubernetes.io/config.mirror":"76402f877907c95a3936143f580968be","kubernetes.io/config.seen":"2023-03-07T18:18:28.739580253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7591 chars]
	I0307 10:28:30.726162    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:30.726168    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.726173    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.726179    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.727114    7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 10:28:30.727123    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.727129    7018 round_trippers.go:580]     Audit-Id: 09ac9355-1c65-4420-8f52-155883618aa6
	I0307 10:28:30.727134    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.727140    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.727145    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.727150    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.727155    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.727288    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.727470    7018 pod_ready.go:92] pod "kube-apiserver-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:30.727476    7018 pod_ready.go:81] duration metric: took 4.446202ms waiting for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.727481    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.727505    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-260000
	I0307 10:28:30.727510    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.727516    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.727522    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.728648    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:30.728659    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.728665    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.728670    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.728674    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.728679    7018 round_trippers.go:580]     Audit-Id: 559a8b88-70d9-4098-a5fd-ce69e6fc06be
	I0307 10:28:30.728684    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.728688    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.728916    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-260000","namespace":"kube-system","uid":"8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c","resourceVersion":"1131","creationTimestamp":"2023-03-07T18:18:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.mirror":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.seen":"2023-03-07T18:18:16.838236256Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7159 chars]
	I0307 10:28:30.729139    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:30.729145    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.729151    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.729157    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.730563    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:30.730570    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.730575    7018 round_trippers.go:580]     Audit-Id: 8efa58ee-7b42-4ba5-a878-ad10e7d3e33b
	I0307 10:28:30.730579    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.730584    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.730588    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.730593    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.730599    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.730701    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.730866    7018 pod_ready.go:92] pod "kube-controller-manager-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:30.730872    7018 pod_ready.go:81] duration metric: took 3.385852ms waiting for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.730877    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.730902    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
	I0307 10:28:30.730906    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.730912    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.730918    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.731885    7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 10:28:30.731894    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.731900    7018 round_trippers.go:580]     Audit-Id: ffc44502-d870-437e-9544-bf450ca2b814
	I0307 10:28:30.731906    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.731914    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.731920    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.731925    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.731930    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.732036    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8qwhq","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e455149-bbe2-4173-a413-f4962626b233","resourceVersion":"1061","creationTimestamp":"2023-03-07T18:18:41Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0307 10:28:30.732243    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:30.732248    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.732255    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.732260    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.733218    7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 10:28:30.733226    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.733232    7018 round_trippers.go:580]     Audit-Id: 3937160f-ce1c-4927-8fe0-6e7893d1567c
	I0307 10:28:30.733237    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.733244    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.733248    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.733253    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.733258    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:30 GMT
	I0307 10:28:30.733356    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:30.733519    7018 pod_ready.go:92] pod "kube-proxy-8qwhq" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:30.733525    7018 pod_ready.go:81] duration metric: took 2.642988ms waiting for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.733531    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:30.912636    7018 request.go:622] Waited for 179.066998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
	I0307 10:28:30.912685    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
	I0307 10:28:30.912694    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:30.912778    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:30.912791    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:30.915495    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:30.915507    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:30.915515    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:30.915522    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:30.915530    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:30.915536    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:30.915544    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:31 GMT
	I0307 10:28:30.915550    7018 round_trippers.go:580]     Audit-Id: 3ae79f8d-1535-4d8e-a180-5f18227960da
	I0307 10:28:30.915655    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pxshj","generateName":"kube-proxy-","namespace":"kube-system","uid":"3ee33e87-083d-4833-a6d4-8b459ec6ea70","resourceVersion":"469","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0307 10:28:31.114599    7018 request.go:622] Waited for 198.634122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:31.114628    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:31.114633    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:31.114642    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:31.114649    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:31.116473    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:31.116483    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:31.116488    7018 round_trippers.go:580]     Audit-Id: e955a99c-57ac-4ae0-a513-9afa809a5caf
	I0307 10:28:31.116493    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:31.116498    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:31.116503    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:31.116509    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:31.116513    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:31 GMT
	I0307 10:28:31.116688    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"75f8e0c4-47f4-43dc-ac5e-5f77d8d4ab3b","resourceVersion":"812","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4512 chars]
	I0307 10:28:31.116864    7018 pod_ready.go:92] pod "kube-proxy-pxshj" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:31.116870    7018 pod_ready.go:81] duration metric: took 383.333062ms waiting for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
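// --- annotation (not part of the test log) --------------------------------
// The "Waited for ...ms due to client-side throttling, not priority and
// fairness" lines above are emitted by client-go's own token-bucket rate
// limiter, not by the API server. When a rest.Config leaves QPS/Burst unset,
// client-go falls back to rest.DefaultQPS (5) and rest.DefaultBurst (10),
// which is consistent with the ~200ms waits during this burst of readiness
// GETs. A hedged sketch of raising the limits; the values are illustrative,
// and whether this client actually uses the defaults is an assumption here.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Illustrative bump over the defaults; zero values mean "use defaults".
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	fmt.Printf("client QPS=%v Burst=%v (library defaults: %v/%v)\n",
		cfg.QPS, cfg.Burst, rest.DefaultQPS, rest.DefaultBurst)
}
// --- end annotation --------------------------------------------------------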
	I0307 10:28:31.116876    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:31.314683    7018 request.go:622] Waited for 197.728848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
	I0307 10:28:31.314736    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
	I0307 10:28:31.314770    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:31.314788    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:31.314803    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:31.317976    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:31.317992    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:31.318000    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:31 GMT
	I0307 10:28:31.318029    7018 round_trippers.go:580]     Audit-Id: a357c92b-2320-4582-b9e7-f62d05a9d4e3
	I0307 10:28:31.318042    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:31.318051    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:31.318057    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:31.318064    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:31.318199    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8cm8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9f69548-a872-4d80-aa73-ffba99b33229","resourceVersion":"1005","creationTimestamp":"2023-03-07T18:26:06Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0307 10:28:31.514054    7018 request.go:622] Waited for 195.505176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
	I0307 10:28:31.514146    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
	I0307 10:28:31.514242    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:31.514254    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:31.514267    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:31.517133    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:31.517148    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:31.517156    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:31.517163    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:31.517171    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:31.517178    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:31.517184    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:31 GMT
	I0307 10:28:31.517191    7018 round_trippers.go:580]     Audit-Id: 532579cf-d5cc-41c0-b38e-54a2f800d22f
	I0307 10:28:31.517302    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m03","uid":"c193c270-6b50-44d5-962f-c88bf307bb54","resourceVersion":"1109","creationTimestamp":"2023-03-07T18:26:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4330 chars]
	I0307 10:28:31.517527    7018 pod_ready.go:92] pod "kube-proxy-q8cm8" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:31.517534    7018 pod_ready.go:81] duration metric: took 400.651378ms waiting for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:31.517542    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:31.713858    7018 request.go:622] Waited for 196.240525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
	I0307 10:28:31.713912    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
	I0307 10:28:31.713952    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:31.713969    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:31.713983    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:31.716855    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:31.716871    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:31.716879    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:31.716894    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:31 GMT
	I0307 10:28:31.716902    7018 round_trippers.go:580]     Audit-Id: 291b5d9b-3357-4be3-9d0c-89832cae8ad3
	I0307 10:28:31.716910    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:31.716917    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:31.716924    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:31.717008    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-260000","namespace":"kube-system","uid":"0739e1eb-4026-47ee-b2fe-6a9901c77317","resourceVersion":"1139","creationTimestamp":"2023-03-07T18:18:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.mirror":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.seen":"2023-03-07T18:18:28.739583516Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4889 chars]
	I0307 10:28:31.912715    7018 request.go:622] Waited for 195.420936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:31.912766    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:31.912775    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:31.912789    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:31.912852    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:31.915496    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:31.915515    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:31.915523    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:32 GMT
	I0307 10:28:31.915532    7018 round_trippers.go:580]     Audit-Id: ab49a22e-b0ca-4460-8af6-f31980cc83e0
	I0307 10:28:31.915539    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:31.915547    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:31.915558    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:31.915565    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:31.915671    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:31.915930    7018 pod_ready.go:92] pod "kube-scheduler-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:31.915938    7018 pod_ready.go:81] duration metric: took 398.388063ms waiting for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:31.915946    7018 pod_ready.go:38] duration metric: took 8.212399171s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 10:28:31.915959    7018 api_server.go:51] waiting for apiserver process to appear ...
	I0307 10:28:31.916021    7018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:28:31.926000    7018 command_runner.go:130] > 1604
	I0307 10:28:31.926101    7018 api_server.go:71] duration metric: took 14.339953362s to wait for apiserver process to appear ...
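// --- annotation (not part of the test log) --------------------------------
// The process probe above ran `sudo pgrep -xnf kube-apiserver.*minikube.*`
// inside the VM and got back PID 1604. In pgrep, -f matches against the full
// command line, -x requires the whole line to match the pattern, and -n keeps
// only the newest match. A hypothetical local-exec equivalent (minikube
// actually routes the command through its ssh_runner, with sudo):
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out))) // e.g. "1604"
}
// --- end annotation --------------------------------------------------------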
	I0307 10:28:31.926109    7018 api_server.go:87] waiting for apiserver healthz status ...
	I0307 10:28:31.926115    7018 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0307 10:28:31.929766    7018 api_server.go:278] https://192.168.64.12:8443/healthz returned 200:
	ok
	I0307 10:28:31.929791    7018 round_trippers.go:463] GET https://192.168.64.12:8443/version
	I0307 10:28:31.929796    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:31.929803    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:31.929809    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:31.930265    7018 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 10:28:31.930272    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:31.930277    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:31.930283    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:31.930291    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:31.930297    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:31.930302    7018 round_trippers.go:580]     Content-Length: 263
	I0307 10:28:31.930307    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:32 GMT
	I0307 10:28:31.930313    7018 round_trippers.go:580]     Audit-Id: 416b7f0f-553f-48b8-8633-6be8897b3ddf
	I0307 10:28:31.930330    7018 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.2",
	  "gitCommit": "fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b",
	  "gitTreeState": "clean",
	  "buildDate": "2023-02-22T13:32:22Z",
	  "goVersion": "go1.19.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0307 10:28:31.930354    7018 api_server.go:140] control plane version: v1.26.2
	I0307 10:28:31.930360    7018 api_server.go:130] duration metric: took 4.24718ms to wait for apiserver health ...
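// --- annotation (not part of the test log) --------------------------------
// The health phase above is two probes: a raw GET /healthz (body "ok") and a
// GET /version decoded into the version JSON shown. A minimal sketch of both
// through client-go's discovery client, assuming the same kubeconfig-based
// setup as the earlier sketches:
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Raw GET /healthz over the authenticated transport; "ok" when healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("/healthz: %s\n", body)
	// Typed equivalent of the GET /version request in the log.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. "v1.26.2"
}
// --- end annotation --------------------------------------------------------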
	I0307 10:28:31.930364    7018 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 10:28:32.112716    7018 request.go:622] Waited for 182.311615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:32.112771    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:32.112780    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:32.112834    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:32.112848    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:32.116811    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:32.116841    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:32.116877    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:32.116904    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:32 GMT
	I0307 10:28:32.116916    7018 round_trippers.go:580]     Audit-Id: c5d1857d-a22f-42d9-aec9-08ad8e7331bd
	I0307 10:28:32.116950    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:32.116966    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:32.116973    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:32.118187    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82836 chars]
	I0307 10:28:32.119945    7018 system_pods.go:59] 12 kube-system pods found
	I0307 10:28:32.119954    7018 system_pods.go:61] "coredns-787d4945fb-x8m8v" [c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6] Running
	I0307 10:28:32.119958    7018 system_pods.go:61] "etcd-multinode-260000" [aa53b0f1-968e-450d-90b2-ad26a79cea99] Running
	I0307 10:28:32.119963    7018 system_pods.go:61] "kindnet-gfgwn" [64dc8044-f77e-41b4-bb19-1a254bf29e05] Running
	I0307 10:28:32.119967    7018 system_pods.go:61] "kindnet-j5gj9" [f17b9702-c5c0-4b31-a136-e0370bc62d79] Running
	I0307 10:28:32.119970    7018 system_pods.go:61] "kindnet-z6kqp" [4884d21b-1be9-4b53-8f70-dd4fe0efa264] Running
	I0307 10:28:32.119975    7018 system_pods.go:61] "kube-apiserver-multinode-260000" [64ba25bc-eee2-433a-b0ef-a13769f04555] Running
	I0307 10:28:32.119993    7018 system_pods.go:61] "kube-controller-manager-multinode-260000" [8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c] Running
	I0307 10:28:32.120000    7018 system_pods.go:61] "kube-proxy-8qwhq" [3e455149-bbe2-4173-a413-f4962626b233] Running
	I0307 10:28:32.120004    7018 system_pods.go:61] "kube-proxy-pxshj" [3ee33e87-083d-4833-a6d4-8b459ec6ea70] Running
	I0307 10:28:32.120008    7018 system_pods.go:61] "kube-proxy-q8cm8" [b9f69548-a872-4d80-aa73-ffba99b33229] Running
	I0307 10:28:32.120011    7018 system_pods.go:61] "kube-scheduler-multinode-260000" [0739e1eb-4026-47ee-b2fe-6a9901c77317] Running
	I0307 10:28:32.120016    7018 system_pods.go:61] "storage-provisioner" [0b88c317-8e90-4927-b4f8-cae5597b5dc8] Running
	I0307 10:28:32.120019    7018 system_pods.go:74] duration metric: took 189.651129ms to wait for pod list to return data ...
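// --- annotation (not part of the test log) --------------------------------
// The "12 kube-system pods found" summary comes from a single LIST on
// /api/v1/namespaces/kube-system/pods. An equivalent client-go sketch, with
// the same kubeconfig-based setup as the earlier sketches:
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Mirrors the log's `"name" [uid] Phase` lines.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
// --- end annotation --------------------------------------------------------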
	I0307 10:28:32.120025    7018 default_sa.go:34] waiting for default service account to be created ...
	I0307 10:28:32.313205    7018 request.go:622] Waited for 193.131438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/default/serviceaccounts
	I0307 10:28:32.313251    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/default/serviceaccounts
	I0307 10:28:32.313259    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:32.313271    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:32.313281    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:32.315756    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:32.315778    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:32.315809    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:32.315822    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:32.315830    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:32.315837    7018 round_trippers.go:580]     Content-Length: 262
	I0307 10:28:32.315843    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:32 GMT
	I0307 10:28:32.315850    7018 round_trippers.go:580]     Audit-Id: ac7a8c42-5ffa-402f-970f-d1d5a6d3058d
	I0307 10:28:32.315857    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:32.315874    7018 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6e32b5cd-63bd-46a7-9ed5-ea842da6729c","resourceVersion":"325","creationTimestamp":"2023-03-07T18:18:42Z"}}]}
	I0307 10:28:32.316001    7018 default_sa.go:45] found service account: "default"
	I0307 10:28:32.316010    7018 default_sa.go:55] duration metric: took 195.9795ms for default service account to be created ...
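
Aside: the repeated "Waited for ... due to client-side throttling, not priority and fairness" lines around here come from client-go's token-bucket rate limiter on the client side (QPS 5, burst 10 by default), not from the API server. A hedged sketch of where those knobs live, with illustrative values rather than anything the test harness actually sets:

	package client

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newFasterClient raises the client-side limiter that produces the
	// request.go:622 waits above. Values are illustrative only.
	func newFasterClient(kubeconfig string) (kubernetes.Interface, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50   // default is 5 requests/second
		cfg.Burst = 100 // default burst is 10
		return kubernetes.NewForConfig(cfg)
	}
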
	I0307 10:28:32.316018    7018 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 10:28:32.513632    7018 request.go:622] Waited for 197.482521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:32.513683    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:32.513691    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:32.513704    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:32.513718    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:32.517123    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:32.517133    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:32.517139    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:32.517144    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:32.517148    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:32 GMT
	I0307 10:28:32.517154    7018 round_trippers.go:580]     Audit-Id: c5f53d8f-ee73-49a6-be78-6ca8c2200a8e
	I0307 10:28:32.517161    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:32.517168    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:32.517894    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82836 chars]
	I0307 10:28:32.519632    7018 system_pods.go:86] 12 kube-system pods found
	I0307 10:28:32.519641    7018 system_pods.go:89] "coredns-787d4945fb-x8m8v" [c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6] Running
	I0307 10:28:32.519650    7018 system_pods.go:89] "etcd-multinode-260000" [aa53b0f1-968e-450d-90b2-ad26a79cea99] Running
	I0307 10:28:32.519654    7018 system_pods.go:89] "kindnet-gfgwn" [64dc8044-f77e-41b4-bb19-1a254bf29e05] Running
	I0307 10:28:32.519659    7018 system_pods.go:89] "kindnet-j5gj9" [f17b9702-c5c0-4b31-a136-e0370bc62d79] Running
	I0307 10:28:32.519664    7018 system_pods.go:89] "kindnet-z6kqp" [4884d21b-1be9-4b53-8f70-dd4fe0efa264] Running
	I0307 10:28:32.519668    7018 system_pods.go:89] "kube-apiserver-multinode-260000" [64ba25bc-eee2-433a-b0ef-a13769f04555] Running
	I0307 10:28:32.519671    7018 system_pods.go:89] "kube-controller-manager-multinode-260000" [8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c] Running
	I0307 10:28:32.519675    7018 system_pods.go:89] "kube-proxy-8qwhq" [3e455149-bbe2-4173-a413-f4962626b233] Running
	I0307 10:28:32.519679    7018 system_pods.go:89] "kube-proxy-pxshj" [3ee33e87-083d-4833-a6d4-8b459ec6ea70] Running
	I0307 10:28:32.519683    7018 system_pods.go:89] "kube-proxy-q8cm8" [b9f69548-a872-4d80-aa73-ffba99b33229] Running
	I0307 10:28:32.519686    7018 system_pods.go:89] "kube-scheduler-multinode-260000" [0739e1eb-4026-47ee-b2fe-6a9901c77317] Running
	I0307 10:28:32.519690    7018 system_pods.go:89] "storage-provisioner" [0b88c317-8e90-4927-b4f8-cae5597b5dc8] Running
	I0307 10:28:32.519694    7018 system_pods.go:126] duration metric: took 203.671188ms to wait for k8s-apps to be running ...
	I0307 10:28:32.519699    7018 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 10:28:32.519751    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:28:32.528776    7018 system_svc.go:56] duration metric: took 9.073723ms WaitForService to wait for kubelet.
	I0307 10:28:32.528791    7018 kubeadm.go:578] duration metric: took 14.942639871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0307 10:28:32.528801    7018 node_conditions.go:102] verifying NodePressure condition ...
	I0307 10:28:32.714684    7018 request.go:622] Waited for 185.826429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes
	I0307 10:28:32.725835    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
	I0307 10:28:32.725851    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:32.725863    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:32.725878    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:32.728446    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:32.728460    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:32.728468    7018 round_trippers.go:580]     Audit-Id: baedd684-4a38-47c3-8b1a-5bac961a5fbc
	I0307 10:28:32.728477    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:32.728490    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:32.728500    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:32.728507    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:32.728514    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:32 GMT
	I0307 10:28:32.728762    7018 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1162"},"items":[{"metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16210 chars]
	I0307 10:28:32.729257    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:28:32.729266    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:28:32.729274    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:28:32.729278    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:28:32.729282    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:28:32.729286    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:28:32.729289    7018 node_conditions.go:105] duration metric: took 200.482518ms to run NodePressure ...
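
Aside: the three repeated capacity pairs above are one cpu/ephemeral-storage reading per node in the NodeList (three nodes in this cluster). A client-go sketch of how such values are read from node status (hypothetical helper, not minikube's node_conditions code):

	package checks

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity mirrors the node_conditions lines above: one
	// cpu/ephemeral-storage pair per node in the NodeList response.
	func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
		nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
		}
		return nil
	}
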
	I0307 10:28:32.729297    7018 start.go:228] waiting for startup goroutines ...
	I0307 10:28:32.729302    7018 start.go:233] waiting for cluster config update ...
	I0307 10:28:32.729308    7018 start.go:242] writing updated cluster config ...
	I0307 10:28:32.729786    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:28:32.729851    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:28:32.751369    7018 out.go:177] * Starting worker node multinode-260000-m02 in cluster multinode-260000
	I0307 10:28:32.794328    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:28:32.794413    7018 cache.go:57] Caching tarball of preloaded images
	I0307 10:28:32.794583    7018 preload.go:174] Found /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 10:28:32.794601    7018 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0307 10:28:32.794723    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:28:32.795675    7018 cache.go:193] Successfully downloaded all kic artifacts
	I0307 10:28:32.795702    7018 start.go:364] acquiring machines lock for multinode-260000-m02: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:28:32.795787    7018 start.go:368] acquired machines lock for "multinode-260000-m02" in 65.198µs
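
Aside: start.go:364/368 show a named per-machine lock with a 500ms retry delay and a 13-minute timeout, so concurrent minikube processes cannot provision the same VM. The real implementation is a cross-process mutex package; the sketch below is only a simplified illustration of the Delay/Timeout loop, using an O_EXCL lockfile (hypothetical, not minikube's code):

	package locks

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire retries an atomic O_EXCL create until it wins or the
	// timeout expires, mirroring the Delay/Timeout fields in the log.
	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay) // the 500ms Delay shown in the log
		}
	}
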
	I0307 10:28:32.795817    7018 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:28:32.795824    7018 fix.go:55] fixHost starting: m02
	I0307 10:28:32.796234    7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:28:32.796271    7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:28:32.804078    7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51665
	I0307 10:28:32.804430    7018 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:28:32.804833    7018 main.go:141] libmachine: Using API Version  1
	I0307 10:28:32.804855    7018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:28:32.805065    7018 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:28:32.805179    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:32.805269    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetState
	I0307 10:28:32.805361    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:28:32.805423    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid from json: 6295
	I0307 10:28:32.806220    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid 6295 missing from process table
	I0307 10:28:32.806256    7018 fix.go:103] recreateIfNeeded on multinode-260000-m02: state=Stopped err=<nil>
	I0307 10:28:32.806268    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	W0307 10:28:32.806350    7018 fix.go:129] unexpected machine state, will restart: <nil>
	I0307 10:28:32.827377    7018 out.go:177] * Restarting existing hyperkit VM for "multinode-260000-m02" ...
	I0307 10:28:32.869734    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .Start
	I0307 10:28:32.869997    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:28:32.870091    7018 main.go:141] libmachine: (multinode-260000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid
	I0307 10:28:32.871656    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid 6295 missing from process table
	I0307 10:28:32.871680    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | pid 6295 is in state "Stopped"
	I0307 10:28:32.871712    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid...
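
Aside: "hyperkit pid 6295 missing from process table" is the driver probing whether the pid recorded in hyperkit.pid still refers to a live process; since it does not, the stale file is removed and the VM is restarted. On Unix that probe is conventionally signal 0, which checks existence without delivering anything. A minimal sketch (hypothetical helper):

	package procs

	import (
		"os"
		"syscall"
	)

	// alive reports whether pid refers to a running process. On Unix,
	// os.FindProcess always succeeds, so the Signal(0) call is the probe.
	func alive(pid int) bool {
		p, err := os.FindProcess(pid)
		if err != nil {
			return false
		}
		return p.Signal(syscall.Signal(0)) == nil
	}
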
	I0307 10:28:32.871965    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Using UUID 835471be-bd14-11ed-9c3c-149d997fca88
	I0307 10:28:32.899206    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Generated MAC ba:65:3c:6f:8d:dc
	I0307 10:28:32.899232    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000
	I0307 10:28:32.899404    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"835471be-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000395b00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:28:32.899444    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"835471be-bd14-11ed-9c3c-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000395b00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:28:32.899480    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "835471be-bd14-11ed-9c3c-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/multinode-260000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"}
	I0307 10:28:32.899519    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 835471be-bd14-11ed-9c3c-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/multinode-260000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"
	I0307 10:28:32.899533    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0307 10:28:32.900716    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 DEBUG: hyperkit: Pid is 7098
	I0307 10:28:32.901058    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Attempt 0
	I0307 10:28:32.901070    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:28:32.901159    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid from json: 7098
	I0307 10:28:32.902759    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Searching for ba:65:3c:6f:8d:dc in /var/db/dhcpd_leases ...
	I0307 10:28:32.902821    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0307 10:28:32.902837    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d38e}
	I0307 10:28:32.902848    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x64078204}
	I0307 10:28:32.902856    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ca:14:a2:6d:d0:c ID:1,ca:14:a2:6d:d0:c Lease:0x6407819f}
	I0307 10:28:32.902881    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d194}
	I0307 10:28:32.902892    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | Found match: ba:65:3c:6f:8d:dc
	I0307 10:28:32.902900    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | IP: 192.168.64.13
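
Aside: the lease scan above is how the hyperkit driver discovers the VM's address. vmnet's DHCP server records leases in /var/db/dhcpd_leases, and the driver matches the MAC it generated for the NIC. A rough sketch of that matching (hypothetical parser; the real lease file is a small brace-delimited record per entry with ip_address and hw_address fields):

	package hyperkit

	import (
		"regexp"
		"strings"
	)

	var (
		ipRe = regexp.MustCompile(`ip_address=(\S+)`)
		hwRe = regexp.MustCompile(`hw_address=\d+,(\S+)`)
	)

	// ipForMAC walks the lease file contents and returns the ip_address of
	// the record whose hw_address matches the generated MAC.
	func ipForMAC(leases, mac string) (string, bool) {
		var ip string
		for _, line := range strings.Split(leases, "\n") {
			if m := ipRe.FindStringSubmatch(line); m != nil {
				ip = m[1]
			}
			if m := hwRe.FindStringSubmatch(line); m != nil && m[1] == mac {
				return ip, true
			}
		}
		return "", false
	}
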
	I0307 10:28:32.902925    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetConfigRaw
	I0307 10:28:32.903499    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
	I0307 10:28:32.903686    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:28:32.904005    7018 machine.go:88] provisioning docker machine ...
	I0307 10:28:32.904016    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:32.904127    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetMachineName
	I0307 10:28:32.904238    7018 buildroot.go:166] provisioning hostname "multinode-260000-m02"
	I0307 10:28:32.904248    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetMachineName
	I0307 10:28:32.904335    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:32.904423    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:32.904506    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:32.904579    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:32.904654    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:32.904766    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:28:32.905083    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.13 22 <nil> <nil>}
	I0307 10:28:32.905099    7018 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-260000-m02 && echo "multinode-260000-m02" | sudo tee /etc/hostname
	I0307 10:28:32.907073    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0307 10:28:32.914845    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0307 10:28:32.915562    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:28:32.915575    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:28:32.915583    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:28:32.915590    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:28:33.270333    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0307 10:28:33.270350    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0307 10:28:33.374324    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:28:33.374345    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:28:33.374362    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:28:33.374375    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:28:33.375209    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0307 10:28:33.375231    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0307 10:28:37.885819    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0307 10:28:37.885892    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0307 10:28:37.885906    7018 main.go:141] libmachine: (multinode-260000-m02) DBG | 2023/03/07 10:28:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0307 10:28:43.994445    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-260000-m02
	
	I0307 10:28:43.994460    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:43.994617    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:43.994725    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:43.994819    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:43.994903    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:43.995031    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:28:43.995375    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.13 22 <nil> <nil>}
	I0307 10:28:43.995387    7018 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-260000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-260000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-260000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 10:28:44.074363    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 10:28:44.074384    7018 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
	I0307 10:28:44.074392    7018 buildroot.go:174] setting up certificates
	I0307 10:28:44.074399    7018 provision.go:83] configureAuth start
	I0307 10:28:44.074407    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetMachineName
	I0307 10:28:44.074531    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
	I0307 10:28:44.074611    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:44.074689    7018 provision.go:138] copyHostCerts
	I0307 10:28:44.074731    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:28:44.074787    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
	I0307 10:28:44.074794    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:28:44.074898    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
	I0307 10:28:44.075070    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:28:44.075104    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
	I0307 10:28:44.075109    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:28:44.075176    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
	I0307 10:28:44.075308    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:28:44.075341    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
	I0307 10:28:44.075345    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:28:44.075412    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
	I0307 10:28:44.075534    7018 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.multinode-260000-m02 san=[192.168.64.13 192.168.64.13 localhost 127.0.0.1 minikube multinode-260000-m02]
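
Aside: provision.go:112 regenerates the machine's server certificate with the SAN list shown (the VM IP listed twice, localhost, 127.0.0.1, and the two hostnames), signed by the profile's CA key. For reference, a condensed sketch of issuing such a cert with Go's standard library (illustrative only; minikube's actual certificate helper differs):

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert signs a server certificate with the given CA, carrying
	// the IP and DNS SANs from the provision.go line above.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-260000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.64.13"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "multinode-260000-m02"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}
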
	I0307 10:28:44.229773    7018 provision.go:172] copyRemoteCerts
	I0307 10:28:44.229826    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 10:28:44.229842    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:44.229985    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:44.230082    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.230172    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:44.230271    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
	I0307 10:28:44.272044    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0307 10:28:44.272115    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 10:28:44.288148    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0307 10:28:44.288225    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0307 10:28:44.303969    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0307 10:28:44.304037    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 10:28:44.319850    7018 provision.go:86] duration metric: configureAuth took 245.441923ms
	I0307 10:28:44.319862    7018 buildroot.go:189] setting minikube options for container-runtime
	I0307 10:28:44.320030    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:28:44.320045    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:44.320174    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:44.320276    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:44.320360    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.320463    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.320545    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:44.320659    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:28:44.320957    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.13 22 <nil> <nil>}
	I0307 10:28:44.320966    7018 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 10:28:44.395776    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 10:28:44.395788    7018 buildroot.go:70] root file system type: tmpfs
	I0307 10:28:44.395864    7018 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 10:28:44.395879    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:44.396009    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:44.396095    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.396175    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.396263    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:44.396386    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:28:44.396702    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.13 22 <nil> <nil>}
	I0307 10:28:44.396747    7018 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.64.12"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 10:28:44.478924    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.64.12
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 10:28:44.478942    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:44.479070    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:44.479153    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.479233    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:44.479316    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:44.479441    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:28:44.479748    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.13 22 <nil> <nil>}
	I0307 10:28:44.479760    7018 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 10:28:45.040521    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 10:28:45.040534    7018 machine.go:91] provisioned docker machine in 12.136465556s
	I0307 10:28:45.040540    7018 start.go:300] post-start starting for "multinode-260000-m02" (driver="hyperkit")
	I0307 10:28:45.040546    7018 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 10:28:45.040555    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:45.040748    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 10:28:45.040760    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:45.040882    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:45.040972    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:45.041059    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:45.041157    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
	I0307 10:28:45.087397    7018 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 10:28:45.091149    7018 command_runner.go:130] > NAME=Buildroot
	I0307 10:28:45.091158    7018 command_runner.go:130] > VERSION=2021.02.12-1-gab7f370-dirty
	I0307 10:28:45.091162    7018 command_runner.go:130] > ID=buildroot
	I0307 10:28:45.091166    7018 command_runner.go:130] > VERSION_ID=2021.02.12
	I0307 10:28:45.091170    7018 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0307 10:28:45.091259    7018 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 10:28:45.091268    7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
	I0307 10:28:45.091351    7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
	I0307 10:28:45.091498    7018 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
	I0307 10:28:45.091504    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /etc/ssl/certs/39032.pem
	I0307 10:28:45.091663    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 10:28:45.100582    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
	I0307 10:28:45.126802    7018 start.go:303] post-start completed in 86.252226ms
	I0307 10:28:45.126814    7018 fix.go:57] fixHost completed within 12.330934005s
	I0307 10:28:45.126826    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:45.126964    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:45.127056    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:45.127154    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:45.127232    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:45.127364    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:28:45.127672    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.13 22 <nil> <nil>}
	I0307 10:28:45.127680    7018 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0307 10:28:45.202858    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678213725.334485743
	
	I0307 10:28:45.202870    7018 fix.go:207] guest clock: 1678213725.334485743
	I0307 10:28:45.202880    7018 fix.go:220] Guest: 2023-03-07 10:28:45.334485743 -0800 PST Remote: 2023-03-07 10:28:45.126816 -0800 PST m=+87.461319305 (delta=207.669743ms)
	I0307 10:28:45.202890    7018 fix.go:191] guest clock delta is within tolerance: 207.669743ms
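
Aside: fix.go reads the guest clock over SSH (the `date +%!s(MISSING).%!N(MISSING)` line above is `date +%s.%N` after passing through a Printf-style logger) and accepts the machine when the delta to the host clock is within tolerance. A small self-contained sketch of that arithmetic, using the figures from the log:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and compares it
	// against the host clock, as fix.go does above.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Guest 1678213725.334485743 vs the host's 10:28:45.126816 -0800.
		d, _ := clockDelta("1678213725.334485743", time.Unix(1678213725, 126816000))
		fmt.Println(d) // ~207.67ms, inside the tolerance fix.go applies
	}
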
	I0307 10:28:45.202894    7018 start.go:83] releasing machines lock for "multinode-260000-m02", held for 12.407039272s
	I0307 10:28:45.202911    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:45.203045    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
	I0307 10:28:45.229173    7018 out.go:177] * Found network options:
	I0307 10:28:45.249904    7018 out.go:177]   - NO_PROXY=192.168.64.12
	W0307 10:28:45.271748    7018 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 10:28:45.271793    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:45.272543    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:45.272757    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:28:45.272892    7018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 10:28:45.272940    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	W0307 10:28:45.273042    7018 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 10:28:45.273135    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:45.273147    7018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 10:28:45.273165    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:28:45.273342    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:45.273376    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:28:45.273607    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:28:45.273659    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:45.273827    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:28:45.273861    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
	I0307 10:28:45.274044    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
	I0307 10:28:45.313860    7018 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0307 10:28:45.314024    7018 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 10:28:45.314083    7018 ssh_runner.go:195] Run: which cri-dockerd
	I0307 10:28:45.353726    7018 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0307 10:28:45.354872    7018 command_runner.go:130] > /usr/bin/cri-dockerd
	I0307 10:28:45.355027    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 10:28:45.362451    7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0307 10:28:45.373398    7018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 10:28:45.384177    7018 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0307 10:28:45.384307    7018 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 10:28:45.384316    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:28:45.384403    7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:28:45.401772    7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
	I0307 10:28:45.401790    7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 10:28:45.401795    7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 10:28:45.401801    7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 10:28:45.401805    7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 10:28:45.401809    7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 10:28:45.401813    7018 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 10:28:45.401818    7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 10:28:45.401823    7018 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0307 10:28:45.401828    7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:28:45.401832    7018 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0307 10:28:45.402825    7018 docker.go:630] Got preloaded images: -- stdout --
	kindest/kindnetd:v20230227-15197099
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0307 10:28:45.402834    7018 docker.go:560] Images already preloaded, skipping extraction
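
Aside: docker.go:560 skips extracting the preload tarball because the `docker images` listing already contains every image expected for v1.26.2. A sketch of that comparison (hypothetical helper):

	package preload

	import (
		"os/exec"
		"strings"
	)

	// imagesPreloaded lists what the daemon has, as the log above does,
	// and reports whether every expected image is already present.
	func imagesPreloaded(expected []string) (bool, error) {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, img := range expected {
			if !have[img] {
				return false, nil
			}
		}
		return true, nil
	}
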
	I0307 10:28:45.402840    7018 start.go:485] detecting cgroup driver to use...
	I0307 10:28:45.402914    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:28:45.415287    7018 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0307 10:28:45.415302    7018 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0307 10:28:45.415537    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 10:28:45.422829    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 10:28:45.429702    7018 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 10:28:45.429750    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 10:28:45.436708    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:28:45.443666    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 10:28:45.450827    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:28:45.457881    7018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 10:28:45.464910    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 10:28:45.471731    7018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 10:28:45.477787    7018 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0307 10:28:45.477987    7018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 10:28:45.484272    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:28:45.566893    7018 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 10:28:45.578247    7018 start.go:485] detecting cgroup driver to use...
	I0307 10:28:45.578332    7018 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 10:28:45.587719    7018 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0307 10:28:45.588048    7018 command_runner.go:130] > [Unit]
	I0307 10:28:45.588056    7018 command_runner.go:130] > Description=Docker Application Container Engine
	I0307 10:28:45.588070    7018 command_runner.go:130] > Documentation=https://docs.docker.com
	I0307 10:28:45.588078    7018 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0307 10:28:45.588085    7018 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0307 10:28:45.588091    7018 command_runner.go:130] > StartLimitBurst=3
	I0307 10:28:45.588111    7018 command_runner.go:130] > StartLimitIntervalSec=60
	I0307 10:28:45.588119    7018 command_runner.go:130] > [Service]
	I0307 10:28:45.588126    7018 command_runner.go:130] > Type=notify
	I0307 10:28:45.588130    7018 command_runner.go:130] > Restart=on-failure
	I0307 10:28:45.588134    7018 command_runner.go:130] > Environment=NO_PROXY=192.168.64.12
	I0307 10:28:45.588141    7018 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0307 10:28:45.588148    7018 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0307 10:28:45.588153    7018 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0307 10:28:45.588159    7018 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0307 10:28:45.588164    7018 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0307 10:28:45.588170    7018 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0307 10:28:45.588176    7018 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0307 10:28:45.588189    7018 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0307 10:28:45.588195    7018 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0307 10:28:45.588199    7018 command_runner.go:130] > ExecStart=
	I0307 10:28:45.588218    7018 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0307 10:28:45.588223    7018 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0307 10:28:45.588228    7018 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0307 10:28:45.588234    7018 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0307 10:28:45.588238    7018 command_runner.go:130] > LimitNOFILE=infinity
	I0307 10:28:45.588247    7018 command_runner.go:130] > LimitNPROC=infinity
	I0307 10:28:45.588253    7018 command_runner.go:130] > LimitCORE=infinity
	I0307 10:28:45.588259    7018 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0307 10:28:45.588263    7018 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0307 10:28:45.588267    7018 command_runner.go:130] > TasksMax=infinity
	I0307 10:28:45.588270    7018 command_runner.go:130] > TimeoutStartSec=0
	I0307 10:28:45.588276    7018 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0307 10:28:45.588279    7018 command_runner.go:130] > Delegate=yes
	I0307 10:28:45.588284    7018 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0307 10:28:45.588294    7018 command_runner.go:130] > KillMode=process
	I0307 10:28:45.588298    7018 command_runner.go:130] > [Install]
	I0307 10:28:45.588302    7018 command_runner.go:130] > WantedBy=multi-user.target
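The drop-in comments in the unit above describe systemd's "clear, then set" rule: an empty ExecStart= discards the command inherited from the base unit so the next ExecStart= replaces it rather than appending (which systemd rejects for Type=notify services). A sketch of writing such a drop-in; the path and dockerd flags below are illustrative, not minikube's exact values:

	package main

	import "os"

	// A docker.service drop-in following the "clear, then set" pattern:
	// the first (empty) ExecStart= resets the inherited command so the
	// second one replaces it instead of being appended.
	const dropIn = `[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	`

	func main() {
		dir := "/etc/systemd/system/docker.service.d"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile(dir+"/override.conf", []byte(dropIn), 0o644); err != nil {
			panic(err)
		}
	}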
	I0307 10:28:45.588380    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:28:45.599940    7018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 10:28:45.612861    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:28:45.622327    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:28:45.630580    7018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 10:28:45.653722    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:28:45.662024    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:28:45.674917    7018 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 10:28:45.674931    7018 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 10:28:45.674988    7018 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 10:28:45.756263    7018 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 10:28:45.846497    7018 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 10:28:45.846514    7018 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0307 10:28:45.858511    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:28:45.944748    7018 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:28:47.255144    7018 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.310371403s)
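The log records only the size of the daemon.json it copies (144 bytes), not its body. A plausible reconstruction of a cgroupfs-driver daemon.json, written the same way, might be (every key here is an assumption, not the file minikube actually shipped):

	package main

	import (
		"encoding/json"
		"os"
	)

	func main() {
		// Assumed shape of a daemon.json selecting the cgroupfs driver;
		// the log only records the file's size, not its contents.
		cfg := map[string]any{
			"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
			"log-driver":     "json-file",
			"storage-driver": "overlay2",
		}
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/docker/daemon.json", data, 0o644); err != nil {
			panic(err)
		}
	}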
	I0307 10:28:47.255214    7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 10:28:47.335677    7018 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 10:28:47.417454    7018 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 10:28:47.513228    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:28:47.598471    7018 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 10:28:47.611967    7018 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 10:28:47.612060    7018 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 10:28:47.616814    7018 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0307 10:28:47.616826    7018 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0307 10:28:47.616831    7018 command_runner.go:130] > Device: 16h/22d	Inode: 852         Links: 1
	I0307 10:28:47.616837    7018 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0307 10:28:47.616851    7018 command_runner.go:130] > Access: 2023-03-07 18:28:47.742167434 +0000
	I0307 10:28:47.616856    7018 command_runner.go:130] > Modify: 2023-03-07 18:28:47.742167434 +0000
	I0307 10:28:47.616860    7018 command_runner.go:130] > Change: 2023-03-07 18:28:47.744167434 +0000
	I0307 10:28:47.616865    7018 command_runner.go:130] >  Birth: -
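"Will wait 60s for socket path" boils down to polling stat until /var/run/cri-dockerd.sock exists as a unix socket, which the stat output above confirms. A sketch of that wait (waitForSocket is our name):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists and is a unix socket,
	// mirroring the "Will wait 60s for socket path" step.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			panic(err)
		}
	}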
	I0307 10:28:47.617043    7018 start.go:553] Will wait 60s for crictl version
	I0307 10:28:47.617089    7018 ssh_runner.go:195] Run: which crictl
	I0307 10:28:47.619815    7018 command_runner.go:130] > /usr/bin/crictl
	I0307 10:28:47.619873    7018 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 10:28:47.691285    7018 command_runner.go:130] > Version:  0.1.0
	I0307 10:28:47.691297    7018 command_runner.go:130] > RuntimeName:  docker
	I0307 10:28:47.691301    7018 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0307 10:28:47.691305    7018 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0307 10:28:47.692228    7018 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0307 10:28:47.692301    7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:28:47.711035    7018 command_runner.go:130] > 20.10.23
	I0307 10:28:47.728475    7018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 10:28:47.749259    7018 command_runner.go:130] > 20.10.23
	I0307 10:28:47.770120    7018 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
	I0307 10:28:47.813210    7018 out.go:177]   - env NO_PROXY=192.168.64.12
	I0307 10:28:47.835385    7018 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
	I0307 10:28:47.835775    7018 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0307 10:28:47.840292    7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
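The one-liner above edits /etc/hosts by filtering out any stale line for the managed name, appending the fresh mapping, and copying a temp file back over the original. The same pattern in Go (pinHost is our name):

	package main

	import (
		"os"
		"strings"
	)

	// pinHost re-pins a hosts entry the way the log's one-liner does:
	// drop any line ending in the managed name, then append the fresh
	// tab-separated mapping and write the result back.
	func pinHost(ip, name string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		var out []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				out = append(out, line)
			}
		}
		out = append(out, ip+"\t"+name)
		return os.WriteFile("/etc/hosts", []byte(strings.Join(out, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := pinHost("192.168.64.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}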
	I0307 10:28:47.848646    7018 certs.go:56] Setting up /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000 for IP: 192.168.64.13
	I0307 10:28:47.848666    7018 certs.go:186] acquiring lock for shared ca certs: {Name:mk21aa92235e3b083ba3cf4a52527e5734aca22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:28:47.848814    7018 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key
	I0307 10:28:47.848878    7018 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key
	I0307 10:28:47.848891    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 10:28:47.848915    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0307 10:28:47.848940    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 10:28:47.848960    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 10:28:47.849045    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem (1338 bytes)
	W0307 10:28:47.849088    7018 certs.go:397] ignoring /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903_empty.pem, impossibly tiny 0 bytes
	I0307 10:28:47.849100    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 10:28:47.849141    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem (1082 bytes)
	I0307 10:28:47.849185    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem (1123 bytes)
	I0307 10:28:47.849224    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem (1675 bytes)
	I0307 10:28:47.849299    7018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem (1708 bytes)
	I0307 10:28:47.849342    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:28:47.849367    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem -> /usr/share/ca-certificates/3903.pem
	I0307 10:28:47.849386    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /usr/share/ca-certificates/39032.pem
	I0307 10:28:47.849662    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 10:28:47.865455    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0307 10:28:47.881052    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 10:28:47.896926    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0307 10:28:47.912741    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 10:28:47.928528    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/3903.pem --> /usr/share/ca-certificates/3903.pem (1338 bytes)
	I0307 10:28:47.945013    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /usr/share/ca-certificates/39032.pem (1708 bytes)
	I0307 10:28:47.960635    7018 ssh_runner.go:195] Run: openssl version
	I0307 10:28:47.964021    7018 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0307 10:28:47.964272    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 10:28:47.971316    7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:28:47.974134    7018 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:28:47.974290    7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar  7 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:28:47.974333    7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 10:28:47.977654    7018 command_runner.go:130] > b5213941
	I0307 10:28:47.977920    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 10:28:47.984887    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3903.pem && ln -fs /usr/share/ca-certificates/3903.pem /etc/ssl/certs/3903.pem"
	I0307 10:28:47.992249    7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3903.pem
	I0307 10:28:47.995266    7018 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 18:06 /usr/share/ca-certificates/3903.pem
	I0307 10:28:47.995458    7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar  7 18:06 /usr/share/ca-certificates/3903.pem
	I0307 10:28:47.995499    7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3903.pem
	I0307 10:28:47.998865    7018 command_runner.go:130] > 51391683
	I0307 10:28:47.999120    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3903.pem /etc/ssl/certs/51391683.0"
	I0307 10:28:48.006141    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/39032.pem && ln -fs /usr/share/ca-certificates/39032.pem /etc/ssl/certs/39032.pem"
	I0307 10:28:48.013240    7018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/39032.pem
	I0307 10:28:48.016074    7018 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 18:06 /usr/share/ca-certificates/39032.pem
	I0307 10:28:48.016260    7018 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar  7 18:06 /usr/share/ca-certificates/39032.pem
	I0307 10:28:48.016294    7018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/39032.pem
	I0307 10:28:48.019631    7018 command_runner.go:130] > 3ec20f2e
	I0307 10:28:48.019880    7018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/39032.pem /etc/ssl/certs/3ec20f2e.0"
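Each certificate above is trusted by computing its OpenSSL subject hash and symlinking /etc/ssl/certs/<hash>.0 at it, the layout OpenSSL's default lookup path expects. A sketch of one install step (installCA is our name; the openssl invocation matches the log):

	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	// installCA computes the OpenSSL subject hash of a PEM and points
	// /etc/ssl/certs/<hash>.0 at it so the default lookup trusts it.
	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}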
	I0307 10:28:48.026902    7018 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 10:28:48.048324    7018 command_runner.go:130] > cgroupfs
	I0307 10:28:48.048980    7018 cni.go:84] Creating CNI manager for ""
	I0307 10:28:48.048990    7018 cni.go:136] 3 nodes found, recommending kindnet
	I0307 10:28:48.048997    7018 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0307 10:28:48.049008    7018 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.13 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-260000 NodeName:multinode-260000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0307 10:28:48.049099    7018 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-260000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
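	minikube renders manifests like the one above from Go templates. A toy rendering of just the InitConfiguration stanza (the template and field names below are ours, not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	// Toy kubeadm manifest template; only illustrates how per-node
	// values (IP, node name, CRI socket) are substituted.
	var tmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: 8443
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`))

	func main() {
		err := tmpl.Execute(os.Stdout, struct {
			NodeIP, NodeName, CRISocket string
		}{"192.168.64.13", "multinode-260000-m02", "/var/run/cri-dockerd.sock"})
		if err != nil {
			panic(err)
		}
	}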
	
	I0307 10:28:48.049134    7018 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-260000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0307 10:28:48.049192    7018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0307 10:28:48.055441    7018 command_runner.go:130] > kubeadm
	I0307 10:28:48.055448    7018 command_runner.go:130] > kubectl
	I0307 10:28:48.055454    7018 command_runner.go:130] > kubelet
	I0307 10:28:48.055533    7018 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 10:28:48.055575    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0307 10:28:48.061804    7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (453 bytes)
	I0307 10:28:48.072809    7018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 10:28:48.083885    7018 ssh_runner.go:195] Run: grep 192.168.64.12	control-plane.minikube.internal$ /etc/hosts
	I0307 10:28:48.086255    7018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 10:28:48.093971    7018 host.go:66] Checking if "multinode-260000" exists ...
	I0307 10:28:48.094151    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:28:48.094253    7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:28:48.094274    7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:28:48.101209    7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51684
	I0307 10:28:48.101550    7018 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:28:48.101900    7018 main.go:141] libmachine: Using API Version  1
	I0307 10:28:48.101916    7018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:28:48.102150    7018 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:28:48.102258    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:28:48.102341    7018 start.go:301] JoinCluster: &{Name:multinode-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.26.2 ClusterName:multinode-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.64.15 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingr
ess:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP:}
	I0307 10:28:48.102433    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0307 10:28:48.102443    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:28:48.102521    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:28:48.102622    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:28:48.102707    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:28:48.102782    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
	I0307 10:28:48.189788    7018 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zh6icb.v6kqx4onyxvfd8hz --discovery-token-ca-cert-hash sha256:d33f97e9e16d7e3e3153d34b9abf6cc9c10aed60f07ce313a956e9c1066684af 
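The join command above comes from asking the control plane for a fresh bootstrap token. Stripped of the sudo/PATH wrapping the runner adds, the call reduces to:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Ask the control plane for a non-expiring token and the matching
	// join command, as the runner does before re-joining the worker.
	func main() {
		out, err := exec.Command("kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").Output()
		if err != nil {
			panic(err)
		}
		joinCmd := strings.TrimSpace(string(out))
		fmt.Println(joinCmd) // kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash sha256:...
	}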
	I0307 10:28:48.189814    7018 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0307 10:28:48.189833    7018 host.go:66] Checking if "multinode-260000" exists ...
	I0307 10:28:48.190161    7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:28:48.190186    7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:28:48.196916    7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51687
	I0307 10:28:48.197249    7018 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:28:48.197612    7018 main.go:141] libmachine: Using API Version  1
	I0307 10:28:48.197624    7018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:28:48.197818    7018 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:28:48.197901    7018 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:28:48.198033    7018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl drain multinode-260000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0307 10:28:48.198050    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:28:48.198133    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:28:48.198209    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:28:48.198294    7018 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:28:48.198376    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
	I0307 10:28:48.295688    7018 command_runner.go:130] > node/multinode-260000-m02 cordoned
	I0307 10:28:51.318733    7018 command_runner.go:130] > pod "busybox-6b86dd6d48-dmrds" has DeletionTimestamp older than 1 seconds, skipping
	I0307 10:28:51.318748    7018 command_runner.go:130] > node/multinode-260000-m02 drained
	I0307 10:28:51.319712    7018 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0307 10:28:51.319724    7018 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-z6kqp, kube-system/kube-proxy-pxshj
	I0307 10:28:51.319743    7018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl drain multinode-260000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.121678108s)
	I0307 10:28:51.319753    7018 node.go:109] successfully drained node "m02"
	I0307 10:28:51.320044    7018 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:28:51.320243    7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:28:51.320537    7018 request.go:1171] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0307 10:28:51.320569    7018 round_trippers.go:463] DELETE https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:51.320574    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:51.320580    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:51.320586    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:51.320592    7018 round_trippers.go:473]     Content-Type: application/json
	I0307 10:28:51.323598    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:51.323609    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:51.323615    7018 round_trippers.go:580]     Audit-Id: d4c330be-b2e7-4781-aecc-cf162ed512f1
	I0307 10:28:51.323620    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:51.323625    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:51.323630    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:51.323636    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:51.323643    7018 round_trippers.go:580]     Content-Length: 171
	I0307 10:28:51.323649    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:51 GMT
	I0307 10:28:51.323663    7018 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-260000-m02","kind":"nodes","uid":"75f8e0c4-47f4-43dc-ac5e-5f77d8d4ab3b"}}
	I0307 10:28:51.323690    7018 node.go:125] successfully deleted node "m02"
	I0307 10:28:51.323697    7018 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
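The DELETE against /api/v1/nodes/multinode-260000-m02 above is what client-go issues for a node deletion. The equivalent call, assuming a reachable kubeconfig (the path below is an assumption):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is assumed; on the control plane the runner
		// uses /var/lib/minikube/kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Issues DELETE /api/v1/nodes/<name>, matching the request above.
		if err := cs.CoreV1().Nodes().Delete(context.Background(),
			"multinode-260000-m02", metav1.DeleteOptions{}); err != nil {
			panic(err)
		}
	}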
	I0307 10:28:51.323715    7018 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0307 10:28:51.323731    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zh6icb.v6kqx4onyxvfd8hz --discovery-token-ca-cert-hash sha256:d33f97e9e16d7e3e3153d34b9abf6cc9c10aed60f07ce313a956e9c1066684af --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-260000-m02"
	I0307 10:28:51.374604    7018 command_runner.go:130] ! W0307 18:28:51.510767    1198 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0307 10:28:51.505076    7018 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 10:28:53.147207    7018 command_runner.go:130] > [preflight] Running pre-flight checks
	I0307 10:28:53.147229    7018 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0307 10:28:53.147240    7018 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0307 10:28:53.147249    7018 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 10:28:53.147258    7018 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 10:28:53.147266    7018 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0307 10:28:53.147275    7018 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0307 10:28:53.147285    7018 command_runner.go:130] > This node has joined the cluster:
	I0307 10:28:53.147294    7018 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0307 10:28:53.147304    7018 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0307 10:28:53.147313    7018 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0307 10:28:53.147327    7018 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zh6icb.v6kqx4onyxvfd8hz --discovery-token-ca-cert-hash sha256:d33f97e9e16d7e3e3153d34b9abf6cc9c10aed60f07ce313a956e9c1066684af --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-260000-m02": (1.823577721s)
	I0307 10:28:53.147343    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0307 10:28:53.256139    7018 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0307 10:28:53.347575    7018 start.go:303] JoinCluster complete in 5.245201975s
	I0307 10:28:53.347588    7018 cni.go:84] Creating CNI manager for ""
	I0307 10:28:53.347594    7018 cni.go:136] 3 nodes found, recommending kindnet
	I0307 10:28:53.347676    7018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 10:28:53.350863    7018 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0307 10:28:53.350874    7018 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0307 10:28:53.350882    7018 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0307 10:28:53.350888    7018 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0307 10:28:53.350895    7018 command_runner.go:130] > Access: 2023-03-07 18:27:25.800133630 +0000
	I0307 10:28:53.350899    7018 command_runner.go:130] > Modify: 2023-02-24 23:58:49.000000000 +0000
	I0307 10:28:53.350904    7018 command_runner.go:130] > Change: 2023-03-07 18:27:24.520133706 +0000
	I0307 10:28:53.350907    7018 command_runner.go:130] >  Birth: -
	I0307 10:28:53.350976    7018 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
	I0307 10:28:53.350986    7018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0307 10:28:53.365774    7018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 10:28:53.573328    7018 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0307 10:28:53.576007    7018 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0307 10:28:53.577626    7018 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0307 10:28:53.586569    7018 command_runner.go:130] > daemonset.apps/kindnet configured
	I0307 10:28:53.588317    7018 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:28:53.588503    7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:28:53.588731    7018 round_trippers.go:463] GET https://192.168.64.12:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0307 10:28:53.588737    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:53.588744    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:53.588750    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:53.590037    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:53.590045    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:53.590053    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:53.590058    7018 round_trippers.go:580]     Content-Length: 292
	I0307 10:28:53.590065    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:53 GMT
	I0307 10:28:53.590074    7018 round_trippers.go:580]     Audit-Id: 09b51ea0-529c-4d47-a052-cef6398d810c
	I0307 10:28:53.590096    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:53.590105    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:53.590110    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:53.590121    7018 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9058bb7-5525-4245-a92a-3b0f0144c5d4","resourceVersion":"1155","creationTimestamp":"2023-03-07T18:18:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0307 10:28:53.590164    7018 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-260000" context rescaled to 1 replicas
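The GET on .../deployments/coredns/scale above hits the scale subresource; rescaling to one replica with client-go looks like this (kubeconfig discovery is an assumption):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		// Read the scale subresource, then write it back with one replica.
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}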
	I0307 10:28:53.590178    7018 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.64.13 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0307 10:28:53.633568    7018 out.go:177] * Verifying Kubernetes components...
	I0307 10:28:53.691468    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:28:53.703497    7018 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:28:53.703698    7018 kapi.go:59] client config for multinode-260000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/15985-3430/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2547800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 10:28:53.703918    7018 node_ready.go:35] waiting up to 6m0s for node "multinode-260000-m02" to be "Ready" ...
	I0307 10:28:53.703963    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:53.703968    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:53.703974    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:53.703981    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:53.705420    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:53.705433    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:53.705439    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:53.705445    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:53 GMT
	I0307 10:28:53.705455    7018 round_trippers.go:580]     Audit-Id: e2d373c1-190f-45e0-b9cf-3d8d054fb1e3
	I0307 10:28:53.705460    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:53.705465    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:53.705470    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:53.705557    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
	I0307 10:28:54.205959    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:54.205976    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:54.205988    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:54.205995    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:54.208023    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:54.208036    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:54.208042    7018 round_trippers.go:580]     Audit-Id: 162bfd38-128d-4c94-8620-4dd73b77dd1a
	I0307 10:28:54.208050    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:54.208055    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:54.208065    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:54.208073    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:54.208080    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:54 GMT
	I0307 10:28:54.208268    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
	I0307 10:28:54.706066    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:54.706077    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:54.706084    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:54.706089    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:54.708076    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:54.708088    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:54.708095    7018 round_trippers.go:580]     Audit-Id: dd80323e-e17e-4577-b133-2911fcce9fc1
	I0307 10:28:54.708100    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:54.708105    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:54.708110    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:54.708115    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:54.708120    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:54 GMT
	I0307 10:28:54.708207    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
	I0307 10:28:55.206158    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:55.206172    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:55.206179    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:55.206184    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:55.207805    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:55.207815    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:55.207820    7018 round_trippers.go:580]     Audit-Id: 9200c148-32d8-4985-98ec-72d4b636ae7e
	I0307 10:28:55.207825    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:55.207831    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:55.207835    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:55.207840    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:55.207845    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:55 GMT
	I0307 10:28:55.207923    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
	I0307 10:28:55.706104    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:55.706119    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:55.706125    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:55.706131    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:55.707769    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:55.707783    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:55.707791    7018 round_trippers.go:580]     Audit-Id: 0773193b-a44b-4173-a89e-1b4397280289
	I0307 10:28:55.707797    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:55.707803    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:55.707808    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:55.707813    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:55.707818    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:55 GMT
	I0307 10:28:55.707892    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
	I0307 10:28:55.708076    7018 node_ready.go:58] node "multinode-260000-m02" has status "Ready":"False"
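The repeated GETs above are a readiness poll: fetch the node, check its Ready condition, sleep roughly half a second, retry until the 6m0s budget runs out. A condensed client-go version of the same loop:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			if ok, err := nodeReady(cs, "multinode-260000-m02"); err == nil && ok {
				fmt.Println("node Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for Ready")
	}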
	I0307 10:28:56.205958    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:56.205974    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:56.205981    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:56.205986    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:56.207374    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:56.207390    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:56.207399    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:56.207406    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:56.207412    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:56 GMT
	I0307 10:28:56.207418    7018 round_trippers.go:580]     Audit-Id: 0b890c7d-2626-4ab5-8e75-3a16b9eecf54
	I0307 10:28:56.207427    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:56.207433    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:56.207515    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1201","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4663 chars]
	I0307 10:28:56.705900    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:56.705916    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:56.705923    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:56.705928    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:56.707741    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:56.707756    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:56.707766    7018 round_trippers.go:580]     Audit-Id: 0e59b396-e7bf-4b72-b74c-a01f645f9864
	I0307 10:28:56.707778    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:56.707804    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:56.707821    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:56.707834    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:56.707842    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:56 GMT
	I0307 10:28:56.707912    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1221","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4772 chars]
	I0307 10:28:57.206205    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:57.206216    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:57.206228    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:57.206234    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:57.207878    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:57.207889    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:57.207894    7018 round_trippers.go:580]     Audit-Id: a4dcdc28-4a89-41fc-a490-5614c72a2f7c
	I0307 10:28:57.207900    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:57.207905    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:57.207913    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:57.207918    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:57.207923    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:57 GMT
	I0307 10:28:57.208010    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1221","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4772 chars]
	I0307 10:28:57.706332    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:57.727379    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:57.727424    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:57.727437    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:57.731183    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:57.731198    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:57.731206    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:57 GMT
	I0307 10:28:57.731221    7018 round_trippers.go:580]     Audit-Id: f535ff1c-e3e0-4a4e-acf9-6dabcd316387
	I0307 10:28:57.731231    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:57.731241    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:57.731249    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:57.731255    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:57.731338    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1221","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4772 chars]
	I0307 10:28:57.731568    7018 node_ready.go:58] node "multinode-260000-m02" has status "Ready":"False"
	I0307 10:28:58.206943    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:58.206954    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.206960    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.206966    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.208597    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.208612    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.208617    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.208623    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.208628    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.208633    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.208638    7018 round_trippers.go:580]     Audit-Id: 14bb95b4-52c5-49f6-baee-19c30e38be33
	I0307 10:28:58.208643    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.208733    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1235","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4619 chars]
	I0307 10:28:58.208922    7018 node_ready.go:49] node "multinode-260000-m02" has status "Ready":"True"
	I0307 10:28:58.208932    7018 node_ready.go:38] duration metric: took 4.5049847s waiting for node "multinode-260000-m02" to be "Ready" ...
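	The node_ready loop above is a plain poll: GET the node every ~500ms and inspect its Ready condition until it flips to True, which is what the repeated "has status \"Ready\":\"False\"" lines record. A minimal sketch of that loop, assuming client-go; the function name waitNodeReady is hypothetical, not minikube's:

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the API server every 500ms, like the loop logged
// above, until the node's Ready condition is True or the timeout expires.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // no Ready condition reported yet
	})
}

	Returning (false, nil) on a failed GET keeps the poll retrying until the deadline instead of aborting on a single transient request error.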
	I0307 10:28:58.208937    7018 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 10:28:58.208966    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0307 10:28:58.208970    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.208977    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.208983    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.211168    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:58.211181    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.211186    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.211192    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.211200    7018 round_trippers.go:580]     Audit-Id: 9e29ae0f-c0b8-46e2-b2ef-ac7c8b7cd885
	I0307 10:28:58.211206    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.211211    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.211218    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.212031    7018 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1235"},"items":[{"metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83248 chars]
	I0307 10:28:58.213928    7018 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.213959    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-x8m8v
	I0307 10:28:58.213966    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.213972    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.213977    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.215266    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.215275    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.215280    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.215285    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.215299    7018 round_trippers.go:580]     Audit-Id: da0297af-ddf8-40bb-ba7e-ee7c25d1d50b
	I0307 10:28:58.215307    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.215315    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.215322    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.215421    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-x8m8v","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6","resourceVersion":"1151","creationTimestamp":"2023-03-07T18:18:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"26b2d6d5-2690-443d-9301-cc21f0f563e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26b2d6d5-2690-443d-9301-cc21f0f563e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6489 chars]
	I0307 10:28:58.215654    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:58.215660    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.215667    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.215673    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.217001    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.217011    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.217018    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.217023    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.217030    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.217035    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.217044    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.217052    7018 round_trippers.go:580]     Audit-Id: bcd5819d-b6c4-402c-84d8-8b34af188a85
	I0307 10:28:58.217231    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:58.217408    7018 pod_ready.go:92] pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:58.217413    7018 pod_ready.go:81] duration metric: took 3.477588ms waiting for pod "coredns-787d4945fb-x8m8v" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.217418    7018 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.217449    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-260000
	I0307 10:28:58.217455    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.217463    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.217469    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.218541    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.218548    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.218553    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.218559    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.218569    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.218574    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.218579    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.218584    7018 round_trippers.go:580]     Audit-Id: 3ecf0cc4-5524-4969-bf64-78cbfa7bcc64
	I0307 10:28:58.218670    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-260000","namespace":"kube-system","uid":"aa53b0f1-968e-450d-90b2-ad26a79cea99","resourceVersion":"1080","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.mirror":"850c338aca464a5a11d41064b4e68a40","kubernetes.io/config.seen":"2023-03-07T18:18:28.739530548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6056 chars]
	I0307 10:28:58.218878    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:58.218884    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.218890    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.218895    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.220222    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.220239    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.220246    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.220251    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.220256    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.220262    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.220268    7018 round_trippers.go:580]     Audit-Id: 16035865-fbff-46a4-82b6-1d4dc225f856
	I0307 10:28:58.220272    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.220340    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:58.220511    7018 pod_ready.go:92] pod "etcd-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:58.220516    7018 pod_ready.go:81] duration metric: took 3.092542ms waiting for pod "etcd-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.220524    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.220551    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-260000
	I0307 10:28:58.220555    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.220561    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.220566    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.221715    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.221722    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.221727    7018 round_trippers.go:580]     Audit-Id: db547fd7-e43b-49f4-9206-870682ba8ead
	I0307 10:28:58.221738    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.221744    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.221749    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.221754    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.221769    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.221904    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-260000","namespace":"kube-system","uid":"64ba25bc-eee2-433a-b0ef-a13769f04555","resourceVersion":"1143","creationTimestamp":"2023-03-07T18:18:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"76402f877907c95a3936143f580968be","kubernetes.io/config.mirror":"76402f877907c95a3936143f580968be","kubernetes.io/config.seen":"2023-03-07T18:18:28.739580253Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7591 chars]
	I0307 10:28:58.222136    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:58.222142    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.222148    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.222153    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.223204    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.223213    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.223218    7018 round_trippers.go:580]     Audit-Id: af2553b3-7312-4d2a-a007-6b34fbaa60fe
	I0307 10:28:58.223223    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.223229    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.223233    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.223239    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.223243    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.223402    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:58.223567    7018 pod_ready.go:92] pod "kube-apiserver-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:58.223572    7018 pod_ready.go:81] duration metric: took 3.043676ms waiting for pod "kube-apiserver-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.223578    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.223603    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-260000
	I0307 10:28:58.223607    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.223624    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.223632    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.224832    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.224840    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.224845    7018 round_trippers.go:580]     Audit-Id: 08c9fdf6-3267-4e2e-935f-9c4e84582ec5
	I0307 10:28:58.224850    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.224859    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.224864    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.224869    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.224874    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.225199    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-260000","namespace":"kube-system","uid":"8dd3c20d-2cb8-4c42-bca5-9c98a4c0901c","resourceVersion":"1131","creationTimestamp":"2023-03-07T18:18:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.mirror":"bd240742399200aca4d9b6c45788c837","kubernetes.io/config.seen":"2023-03-07T18:18:16.838236256Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7159 chars]
	I0307 10:28:58.225429    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:58.225437    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.225443    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.225449    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.226687    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.226694    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.226699    7018 round_trippers.go:580]     Audit-Id: 7796790d-620c-401a-9f3a-b4ce8b9acc5f
	I0307 10:28:58.226704    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.226710    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.226714    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.226719    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.226725    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.226885    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:58.227057    7018 pod_ready.go:92] pod "kube-controller-manager-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:58.227062    7018 pod_ready.go:81] duration metric: took 3.479487ms waiting for pod "kube-controller-manager-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.227067    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.407059    7018 request.go:622] Waited for 179.951206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
	I0307 10:28:58.407094    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8qwhq
	I0307 10:28:58.407101    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.407154    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.407160    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.408789    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:58.408801    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.408809    7018 round_trippers.go:580]     Audit-Id: c45ed864-b7ed-4df5-a14e-1c1a9c154846
	I0307 10:28:58.408817    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.408824    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.408829    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.408834    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.408845    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.409069    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8qwhq","generateName":"kube-proxy-","namespace":"kube-system","uid":"3e455149-bbe2-4173-a413-f4962626b233","resourceVersion":"1061","creationTimestamp":"2023-03-07T18:18:41Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
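	The request.go:622 lines above ("Waited for ... due to client-side throttling, not priority and fairness") come from client-go's client-side rate limiter: when a burst of polling outruns the token bucket, requests queue and client-go logs the delay. A minimal sketch of raising those limits on a rest.Config, assuming client-go; the QPS/Burst values are illustrative, not what minikube sets:

package sketch

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// newFasterConfig loads a kubeconfig and raises the client-side rate
// limits. The defaults are QPS=5 and Burst=10, which a tight polling loop
// like the one above exhausts quickly, producing the logged waits.
func newFasterConfig(kubeconfig string) (*rest.Config, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // illustrative: steady-state requests per second
	cfg.Burst = 100 // illustrative: short-burst allowance
	return cfg, nil
}

	With the defaults, the back-to-back pod and node GETs below each queue behind the token bucket, which is exactly what produces the ~180-200ms waits in the log.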
	I0307 10:28:58.608673    7018 request.go:622] Waited for 199.329269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:58.608848    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:58.608860    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.608872    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.608882    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.611654    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:58.611670    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.611677    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.611684    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.611692    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.611701    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.611709    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.611715    7018 round_trippers.go:580]     Audit-Id: 76524fea-611e-49f8-bb7e-5eb3dc168072
	I0307 10:28:58.611840    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:58.612099    7018 pod_ready.go:92] pod "kube-proxy-8qwhq" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:58.612108    7018 pod_ready.go:81] duration metric: took 385.031837ms waiting for pod "kube-proxy-8qwhq" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.612116    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:58.808367    7018 request.go:622] Waited for 196.171802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
	I0307 10:28:58.808492    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pxshj
	I0307 10:28:58.808504    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:58.808517    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:58.808529    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:58.811399    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:58.811415    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:58.811423    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:58.811429    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:58.811436    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:58 GMT
	I0307 10:28:58.811442    7018 round_trippers.go:580]     Audit-Id: 3bbb7a3c-520d-4a16-9e4e-62fab5920986
	I0307 10:28:58.811449    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:58.811455    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:58.811559    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pxshj","generateName":"kube-proxy-","namespace":"kube-system","uid":"3ee33e87-083d-4833-a6d4-8b459ec6ea70","resourceVersion":"1218","creationTimestamp":"2023-03-07T18:19:13Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0307 10:28:59.008406    7018 request.go:622] Waited for 196.512217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:59.008597    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m02
	I0307 10:28:59.008608    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:59.008621    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:59.008631    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:59.011231    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:59.011250    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:59.011258    7018 round_trippers.go:580]     Audit-Id: a7a3df8f-11e9-4890-88c0-bd4fb1da521d
	I0307 10:28:59.011266    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:59.011273    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:59.011280    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:59.011289    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:59.011295    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:59 GMT
	I0307 10:28:59.011388    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m02","uid":"ad92b229-7a8c-479f-886f-f6bdf07e6c15","resourceVersion":"1235","creationTimestamp":"2023-03-07T18:28:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:28:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4619 chars]
	I0307 10:28:59.011635    7018 pod_ready.go:92] pod "kube-proxy-pxshj" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:59.011645    7018 pod_ready.go:81] duration metric: took 399.518428ms waiting for pod "kube-proxy-pxshj" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:59.011652    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:59.208322    7018 request.go:622] Waited for 196.555002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
	I0307 10:28:59.208407    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8cm8
	I0307 10:28:59.208417    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:59.208432    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:59.208444    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:59.211802    7018 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 10:28:59.211825    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:59.211836    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:59.211865    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:59 GMT
	I0307 10:28:59.211875    7018 round_trippers.go:580]     Audit-Id: 80279da3-3584-4856-89d4-205b357cfc2e
	I0307 10:28:59.211901    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:59.211908    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:59.211916    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:59.212031    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8cm8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9f69548-a872-4d80-aa73-ffba99b33229","resourceVersion":"1005","creationTimestamp":"2023-03-07T18:26:06Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc097476-6e75-4c41-b587-b33736193800","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc097476-6e75-4c41-b587-b33736193800\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0307 10:28:59.407671    7018 request.go:622] Waited for 195.295612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
	I0307 10:28:59.407782    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000-m03
	I0307 10:28:59.407790    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:59.407799    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:59.407807    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:59.409534    7018 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 10:28:59.409543    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:59.409549    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:59 GMT
	I0307 10:28:59.409562    7018 round_trippers.go:580]     Audit-Id: dced968d-8259-48a8-a369-67bdece8d0ff
	I0307 10:28:59.409577    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:59.409586    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:59.409591    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:59.409597    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:59.409645    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000-m03","uid":"c193c270-6b50-44d5-962f-c88bf307bb54","resourceVersion":"1109","creationTimestamp":"2023-03-07T18:26:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:26:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4330 chars]
	I0307 10:28:59.409824    7018 pod_ready.go:92] pod "kube-proxy-q8cm8" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:59.409830    7018 pod_ready.go:81] duration metric: took 398.16179ms waiting for pod "kube-proxy-q8cm8" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:59.409836    7018 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:59.607367    7018 request.go:622] Waited for 197.479712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
	I0307 10:28:59.607426    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-260000
	I0307 10:28:59.607435    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:59.607535    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:59.607549    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:59.610313    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:59.610332    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:59.610344    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:59 GMT
	I0307 10:28:59.610351    7018 round_trippers.go:580]     Audit-Id: 831ac5c9-6a6e-4238-9a57-e226e9d7fa9a
	I0307 10:28:59.610359    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:59.610366    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:59.610373    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:59.610380    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:59.610482    7018 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-260000","namespace":"kube-system","uid":"0739e1eb-4026-47ee-b2fe-6a9901c77317","resourceVersion":"1139","creationTimestamp":"2023-03-07T18:18:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.mirror":"893f1932edb247b22dcb3c8a95f80e4d","kubernetes.io/config.seen":"2023-03-07T18:18:28.739583516Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4889 chars]
	I0307 10:28:59.807243    7018 request.go:622] Waited for 196.466836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:59.807382    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-260000
	I0307 10:28:59.807393    7018 round_trippers.go:469] Request Headers:
	I0307 10:28:59.807405    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:28:59.807416    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:28:59.809503    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:28:59.809522    7018 round_trippers.go:577] Response Headers:
	I0307 10:28:59.809534    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:28:59.809565    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:28:59 GMT
	I0307 10:28:59.809578    7018 round_trippers.go:580]     Audit-Id: 0db6ab63-4a4e-453d-ac64-1584164a0c7d
	I0307 10:28:59.809586    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:28:59.809593    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:28:59.809600    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:28:59.809729    7018 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T18:18:25Z","fieldsType":"FieldsV1","f [truncated 5330 chars]
	I0307 10:28:59.810013    7018 pod_ready.go:92] pod "kube-scheduler-multinode-260000" in "kube-system" namespace has status "Ready":"True"
	I0307 10:28:59.810022    7018 pod_ready.go:81] duration metric: took 400.179443ms waiting for pod "kube-scheduler-multinode-260000" in "kube-system" namespace to be "Ready" ...
	I0307 10:28:59.810030    7018 pod_ready.go:38] duration metric: took 1.60107891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
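	The pod_ready phase above lists kube-system pods by label (k8s-app=kube-dns, component=etcd, ..., component=kube-scheduler) and waits for each one's Ready condition. A minimal sketch under the same client-go assumption; waitLabeledPodsReady and podIsReady are hypothetical names:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether a pod's Ready condition is True.
func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitLabeledPodsReady polls until every kube-system pod matching the
// selector (e.g. "component=etcd" or "k8s-app=kube-proxy") is Ready.
func waitLabeledPodsReady(cs kubernetes.Interface, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // transient errors: keep polling
		}
		for i := range pods.Items {
			if !podIsReady(&pods.Items[i]) {
				return false, nil
			}
		}
		return len(pods.Items) > 0, nil // require at least one match
	})
}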
	I0307 10:28:59.810045    7018 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 10:28:59.810114    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:28:59.818885    7018 system_svc.go:56] duration metric: took 8.836426ms WaitForService to wait for kubelet.
	I0307 10:28:59.818896    7018 kubeadm.go:578] duration metric: took 6.228675231s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0307 10:28:59.818910    7018 node_conditions.go:102] verifying NodePressure condition ...
	I0307 10:29:00.007159    7018 request.go:622] Waited for 188.194062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes
	I0307 10:29:00.007207    7018 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
	I0307 10:29:00.007270    7018 round_trippers.go:469] Request Headers:
	I0307 10:29:00.007282    7018 round_trippers.go:473]     Accept: application/json, */*
	I0307 10:29:00.007294    7018 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0307 10:29:00.010101    7018 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 10:29:00.010120    7018 round_trippers.go:577] Response Headers:
	I0307 10:29:00.010131    7018 round_trippers.go:580]     Audit-Id: 230c0ab3-666e-4727-a5a5-c4ebee390789
	I0307 10:29:00.010139    7018 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 10:29:00.010146    7018 round_trippers.go:580]     Content-Type: application/json
	I0307 10:29:00.010153    7018 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c92adc6f-8cfd-48e5-a937-f3c14b5e4585
	I0307 10:29:00.010162    7018 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7ba93426-f5a5-4d63-ad87-0c18c78a4061
	I0307 10:29:00.010174    7018 round_trippers.go:580]     Date: Tue, 07 Mar 2023 18:29:00 GMT
	I0307 10:29:00.010474    7018 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1235"},"items":[{"metadata":{"name":"multinode-260000","uid":"89da6a3f-6e4a-4e51-b2db-31d71eab4c40","resourceVersion":"1092","creationTimestamp":"2023-03-07T18:18:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-260000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"592b1e9939a898d806f69aad174a19c45f317df1","minikube.k8s.io/name":"multinode-260000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_07T10_18_30_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16317 chars]
	I0307 10:29:00.011046    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:29:00.011058    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:29:00.011066    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:29:00.011071    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:29:00.011075    7018 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0307 10:29:00.011082    7018 node_conditions.go:123] node cpu capacity is 2
	I0307 10:29:00.011087    7018 node_conditions.go:105] duration metric: took 192.17207ms to run NodePressure ...
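The node_conditions lines read capacity straight from the NodeList response above; the same fields can be pulled with kubectl (a sketch, not part of the test run):

	# name, CPU capacity and ephemeral-storage capacity per node
	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'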
	I0307 10:29:00.011096    7018 start.go:228] waiting for startup goroutines ...
	I0307 10:29:00.011118    7018 start.go:242] writing updated cluster config ...
	I0307 10:29:00.011876    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:29:00.012002    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:29:00.054733    7018 out.go:177] * Starting worker node multinode-260000-m03 in cluster multinode-260000
	I0307 10:29:00.075685    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:29:00.075744    7018 cache.go:57] Caching tarball of preloaded images
	I0307 10:29:00.075937    7018 preload.go:174] Found /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 10:29:00.075956    7018 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0307 10:29:00.076097    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:29:00.077109    7018 cache.go:193] Successfully downloaded all kic artifacts
	I0307 10:29:00.077151    7018 start.go:364] acquiring machines lock for multinode-260000-m03: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:29:00.077243    7018 start.go:368] acquired machines lock for "multinode-260000-m03" in 73.572µs
	I0307 10:29:00.077280    7018 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:29:00.077288    7018 fix.go:55] fixHost starting: m03
	I0307 10:29:00.077721    7018 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:29:00.077794    7018 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:29:00.085146    7018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51690
	I0307 10:29:00.085469    7018 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:29:00.085788    7018 main.go:141] libmachine: Using API Version  1
	I0307 10:29:00.085809    7018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:29:00.086053    7018 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:29:00.086177    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:00.086254    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetState
	I0307 10:29:00.086348    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:29:00.086412    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid from json: 6959
	I0307 10:29:00.087210    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid 6959 missing from process table
	I0307 10:29:00.087228    7018 fix.go:103] recreateIfNeeded on multinode-260000-m03: state=Stopped err=<nil>
	I0307 10:29:00.087236    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	W0307 10:29:00.087313    7018 fix.go:129] unexpected machine state, will restart: <nil>
	I0307 10:29:00.108838    7018 out.go:177] * Restarting existing hyperkit VM for "multinode-260000-m03" ...
	I0307 10:29:00.150753    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .Start
	I0307 10:29:00.151097    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:29:00.151124    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/hyperkit.pid
	I0307 10:29:00.151193    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Using UUID 79b2bd18-bd15-11ed-8f77-149d997fca88
	I0307 10:29:00.180096    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Generated MAC 12:aa:e8:53:6e:6b
	I0307 10:29:00.180120    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000
	I0307 10:29:00.180266    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"79b2bd18-bd15-11ed-8f77-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002c11a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:29:00.180309    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"79b2bd18-bd15-11ed-8f77-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002c11a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:29:00.180345    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "79b2bd18-bd15-11ed-8f77-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/multinode-260000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"}
	I0307 10:29:00.180370    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 79b2bd18-bd15-11ed-8f77-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/multinode-260000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-260000"
	I0307 10:29:00.180383    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0307 10:29:00.181671    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 DEBUG: hyperkit: Pid is 7128
	I0307 10:29:00.182013    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Attempt 0
	I0307 10:29:00.182028    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:29:00.182112    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid from json: 7128
	I0307 10:29:00.183032    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Searching for 12:aa:e8:53:6e:6b in /var/db/dhcpd_leases ...
	I0307 10:29:00.183093    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0307 10:29:00.183123    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d3d8}
	I0307 10:29:00.183132    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d38e}
	I0307 10:29:00.183144    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x64078204}
	I0307 10:29:00.183153    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | Found match: 12:aa:e8:53:6e:6b
	I0307 10:29:00.183173    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | IP: 192.168.64.15
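The driver maps the VM's generated MAC to an IP by scanning the host's DHCP leases, as logged above; the equivalent manual lookup on the macOS host, assuming the stock lease-file layout where ip_address precedes hw_address in each block:

	# print the lines around the MAC so the matching ip_address entry is visible
	sudo grep -i -B 2 '12:aa:e8:53:6e:6b' /var/db/dhcpd_leases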
	I0307 10:29:00.183209    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetConfigRaw
	I0307 10:29:00.183787    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetIP
	I0307 10:29:00.183966    7018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/multinode-260000/config.json ...
	I0307 10:29:00.184309    7018 machine.go:88] provisioning docker machine ...
	I0307 10:29:00.184319    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:00.184441    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetMachineName
	I0307 10:29:00.184532    7018 buildroot.go:166] provisioning hostname "multinode-260000-m03"
	I0307 10:29:00.184543    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetMachineName
	I0307 10:29:00.184630    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:00.184704    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:00.184784    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:00.184866    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:00.184944    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:00.185055    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:29:00.185361    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.15 22 <nil> <nil>}
	I0307 10:29:00.185370    7018 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-260000-m03 && echo "multinode-260000-m03" | sudo tee /etc/hostname
	I0307 10:29:00.188080    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0307 10:29:00.195643    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0307 10:29:00.196371    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:29:00.196384    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:29:00.196392    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:29:00.196404    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:29:00.552977    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0307 10:29:00.552995    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0307 10:29:00.657061    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:29:00.657081    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:29:00.657091    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:29:00.657102    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:29:00.657942    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0307 10:29:00.657953    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0307 10:29:05.166903    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0307 10:29:05.166935    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0307 10:29:05.166942    7018 main.go:141] libmachine: (multinode-260000-m03) DBG | 2023/03/07 10:29:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0307 10:29:11.261985    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-260000-m03
	
	I0307 10:29:11.262003    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:11.262135    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:11.262237    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.262323    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.262404    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:11.262539    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:29:11.262858    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.15 22 <nil> <nil>}
	I0307 10:29:11.262870    7018 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-260000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-260000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-260000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 10:29:11.336626    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 10:29:11.336642    7018 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
	I0307 10:29:11.336650    7018 buildroot.go:174] setting up certificates
	I0307 10:29:11.336658    7018 provision.go:83] configureAuth start
	I0307 10:29:11.336666    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetMachineName
	I0307 10:29:11.336795    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetIP
	I0307 10:29:11.336894    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:11.336973    7018 provision.go:138] copyHostCerts
	I0307 10:29:11.337009    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:29:11.337059    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
	I0307 10:29:11.337064    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:29:11.337174    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
	I0307 10:29:11.337363    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:29:11.337395    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
	I0307 10:29:11.337400    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:29:11.337460    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
	I0307 10:29:11.337578    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:29:11.337610    7018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
	I0307 10:29:11.337615    7018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:29:11.337670    7018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
	I0307 10:29:11.337789    7018 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.multinode-260000-m03 san=[192.168.64.15 192.168.64.15 localhost 127.0.0.1 minikube multinode-260000-m03]
	I0307 10:29:11.427111    7018 provision.go:172] copyRemoteCerts
	I0307 10:29:11.427165    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 10:29:11.427179    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:11.427324    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:11.427419    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.427541    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:11.427623    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
	I0307 10:29:11.465606    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0307 10:29:11.465676    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 10:29:11.481351    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0307 10:29:11.481417    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0307 10:29:11.496933    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0307 10:29:11.496996    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 10:29:11.512347    7018 provision.go:86] duration metric: configureAuth took 175.680754ms
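configureAuth regenerated the server certificate with the SANs listed above and copied it to /etc/docker on the guest; a quick check that the SANs actually landed (run inside the guest, path as scp'd above):

	# dump the SAN extension of the freshly installed server certificate
	openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'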
	I0307 10:29:11.512360    7018 buildroot.go:189] setting minikube options for container-runtime
	I0307 10:29:11.512526    7018 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:29:11.512539    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:11.512663    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:11.512758    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:11.512840    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.512918    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.512998    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:11.513100    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:29:11.513391    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.15 22 <nil> <nil>}
	I0307 10:29:11.513399    7018 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 10:29:11.579311    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 10:29:11.579323    7018 buildroot.go:70] root file system type: tmpfs
	I0307 10:29:11.579401    7018 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 10:29:11.579411    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:11.579540    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:11.579641    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.579740    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.579829    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:11.579956    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:29:11.580270    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.15 22 <nil> <nil>}
	I0307 10:29:11.580316    7018 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.64.12"
	Environment="NO_PROXY=192.168.64.12,192.168.64.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 10:29:11.652702    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.64.12
	Environment=NO_PROXY=192.168.64.12,192.168.64.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 10:29:11.652720    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:11.652848    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:11.652922    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.653006    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:11.653098    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:11.653250    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:29:11.653560    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.15 22 <nil> <nil>}
	I0307 10:29:11.653573    7018 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 10:29:12.175360    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
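Note that the installed unit carries two Environment=NO_PROXY assignments (one per already-running node); systemd applies repeated assignments in order, so the later, two-address value is the one exported to dockerd. To confirm from inside the guest:

	# show the environment the docker unit will actually export
	systemctl show docker --property=Environment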
	
	I0307 10:29:12.175374    7018 machine.go:91] provisioned docker machine in 11.991002684s
	I0307 10:29:12.175381    7018 start.go:300] post-start starting for "multinode-260000-m03" (driver="hyperkit")
	I0307 10:29:12.175386    7018 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 10:29:12.175396    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:12.175581    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 10:29:12.175596    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:12.175686    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:12.175759    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:12.175827    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:12.175912    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
	I0307 10:29:12.214369    7018 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 10:29:12.216755    7018 command_runner.go:130] > NAME=Buildroot
	I0307 10:29:12.216767    7018 command_runner.go:130] > VERSION=2021.02.12-1-gab7f370-dirty
	I0307 10:29:12.216773    7018 command_runner.go:130] > ID=buildroot
	I0307 10:29:12.216793    7018 command_runner.go:130] > VERSION_ID=2021.02.12
	I0307 10:29:12.216800    7018 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0307 10:29:12.216963    7018 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 10:29:12.216972    7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
	I0307 10:29:12.217057    7018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
	I0307 10:29:12.217200    7018 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
	I0307 10:29:12.217206    7018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> /etc/ssl/certs/39032.pem
	I0307 10:29:12.217370    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 10:29:12.223606    7018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
	I0307 10:29:12.239878    7018 start.go:303] post-start completed in 64.487773ms
	I0307 10:29:12.239896    7018 fix.go:57] fixHost completed within 12.162546961s
	I0307 10:29:12.239910    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:12.240038    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:12.240131    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:12.240212    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:12.240290    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:12.240409    7018 main.go:141] libmachine: Using SSH client type: native
	I0307 10:29:12.240714    7018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.15 22 <nil> <nil>}
	I0307 10:29:12.240722    7018 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0307 10:29:12.305514    7018 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678213752.437212482
	
	I0307 10:29:12.305525    7018 fix.go:207] guest clock: 1678213752.437212482
	I0307 10:29:12.305531    7018 fix.go:220] Guest: 2023-03-07 10:29:12.437212482 -0800 PST Remote: 2023-03-07 10:29:12.239899 -0800 PST m=+114.574278242 (delta=197.313482ms)
	I0307 10:29:12.305540    7018 fix.go:191] guest clock delta is within tolerance: 197.313482ms
	I0307 10:29:12.305543    7018 start.go:83] releasing machines lock for "multinode-260000-m03", held for 12.228234634s
	I0307 10:29:12.305562    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:12.305681    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetIP
	I0307 10:29:12.327827    7018 out.go:177] * Found network options:
	I0307 10:29:12.349261    7018 out.go:177]   - NO_PROXY=192.168.64.12,192.168.64.13
	W0307 10:29:12.371206    7018 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 10:29:12.371232    7018 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 10:29:12.371252    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:12.372006    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:12.372213    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .DriverName
	I0307 10:29:12.372340    7018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 10:29:12.372393    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	W0307 10:29:12.372424    7018 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 10:29:12.372448    7018 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 10:29:12.372546    7018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 10:29:12.372566    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHHostname
	I0307 10:29:12.372582    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:12.372778    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHPort
	I0307 10:29:12.372789    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:12.372944    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:12.372988    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHKeyPath
	I0307 10:29:12.373142    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
	I0307 10:29:12.373168    7018 main.go:141] libmachine: (multinode-260000-m03) Calling .GetSSHUsername
	I0307 10:29:12.373363    7018 sshutil.go:53] new ssh client: &{IP:192.168.64.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m03/id_rsa Username:docker}
	I0307 10:29:12.410014    7018 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0307 10:29:12.410159    7018 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 10:29:12.410222    7018 ssh_runner.go:195] Run: which cri-dockerd
	I0307 10:29:12.452473    7018 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0307 10:29:12.452552    7018 command_runner.go:130] > /usr/bin/cri-dockerd
	I0307 10:29:12.452679    7018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 10:29:12.459245    7018 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0307 10:29:12.470219    7018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 10:29:12.486201    7018 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0307 10:29:12.486242    7018 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
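The find above parks any bridge/podman CNI configs under an .mk_disabled suffix so the kindnet config takes precedence; to see what was disabled on the guest:

	# CNI configs the rename disabled (suffix added by the find command above)
	ls /etc/cni/net.d/*.mk_disabled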
	I0307 10:29:12.486250    7018 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:29:12.486346    7018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:29:12.502691    7018 command_runner.go:130] > kindest/kindnetd:v20230227-15197099
	I0307 10:29:12.502703    7018 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0307 10:29:12.502708    7018 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0307 10:29:12.502712    7018 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0307 10:29:12.502716    7018 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0307 10:29:12.502719    7018 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0307 10:29:12.502723    7018 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 10:29:12.502728    7018 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0307 10:29:12.502732    7018 command_runner.go:130] > registry.k8s.io/pause:3.6
	I0307 10:29:12.502737    7018 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 10:29:12.503864    7018 docker.go:630] Got preloaded images: -- stdout --
	kindest/kindnetd:v20230227-15197099
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 10:29:12.503874    7018 docker.go:560] Images already preloaded, skipping extraction
	I0307 10:29:12.503880    7018 start.go:485] detecting cgroup driver to use...
	I0307 10:29:12.503940    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:29:12.523327    7018 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0307 10:29:12.523340    7018 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0307 10:29:12.524671    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 10:29:12.536597    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 10:29:12.544140    7018 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 10:29:12.544193    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 10:29:12.550489    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:29:12.556842    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 10:29:12.563095    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:29:12.569445    7018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 10:29:12.575946    7018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
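The sed passes above pin containerd to the cgroupfs driver, the runc v2 shim, the pause 3.9 sandbox image, and the default CNI conf dir; a one-liner to confirm the rewrites took (key names as in the stock CRI plugin config):

	# verify the containerd settings the sed commands rewrote
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml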
	I0307 10:29:12.582556    7018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 10:29:12.588055    7018 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0307 10:29:12.588181    7018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 10:29:12.594025    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:29:12.673337    7018 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 10:29:12.685510    7018 start.go:485] detecting cgroup driver to use...
	I0307 10:29:12.685584    7018 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 10:29:12.695059    7018 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0307 10:29:12.696323    7018 command_runner.go:130] > [Unit]
	I0307 10:29:12.696352    7018 command_runner.go:130] > Description=Docker Application Container Engine
	I0307 10:29:12.696362    7018 command_runner.go:130] > Documentation=https://docs.docker.com
	I0307 10:29:12.696367    7018 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0307 10:29:12.696371    7018 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0307 10:29:12.696375    7018 command_runner.go:130] > StartLimitBurst=3
	I0307 10:29:12.696382    7018 command_runner.go:130] > StartLimitIntervalSec=60
	I0307 10:29:12.696388    7018 command_runner.go:130] > [Service]
	I0307 10:29:12.696393    7018 command_runner.go:130] > Type=notify
	I0307 10:29:12.696397    7018 command_runner.go:130] > Restart=on-failure
	I0307 10:29:12.696402    7018 command_runner.go:130] > Environment=NO_PROXY=192.168.64.12
	I0307 10:29:12.696406    7018 command_runner.go:130] > Environment=NO_PROXY=192.168.64.12,192.168.64.13
	I0307 10:29:12.696413    7018 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0307 10:29:12.696422    7018 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0307 10:29:12.696428    7018 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0307 10:29:12.696433    7018 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0307 10:29:12.696439    7018 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0307 10:29:12.696445    7018 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0307 10:29:12.696454    7018 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0307 10:29:12.696462    7018 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0307 10:29:12.696468    7018 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0307 10:29:12.696471    7018 command_runner.go:130] > ExecStart=
	I0307 10:29:12.696485    7018 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0307 10:29:12.696489    7018 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0307 10:29:12.696497    7018 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0307 10:29:12.696503    7018 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0307 10:29:12.696506    7018 command_runner.go:130] > LimitNOFILE=infinity
	I0307 10:29:12.696510    7018 command_runner.go:130] > LimitNPROC=infinity
	I0307 10:29:12.696514    7018 command_runner.go:130] > LimitCORE=infinity
	I0307 10:29:12.696519    7018 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0307 10:29:12.696524    7018 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0307 10:29:12.696527    7018 command_runner.go:130] > TasksMax=infinity
	I0307 10:29:12.696531    7018 command_runner.go:130] > TimeoutStartSec=0
	I0307 10:29:12.696536    7018 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0307 10:29:12.696540    7018 command_runner.go:130] > Delegate=yes
	I0307 10:29:12.696549    7018 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0307 10:29:12.696553    7018 command_runner.go:130] > KillMode=process
	I0307 10:29:12.696557    7018 command_runner.go:130] > [Install]
	I0307 10:29:12.696562    7018 command_runner.go:130] > WantedBy=multi-user.target
	I0307 10:29:12.696635    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:29:12.705902    7018 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 10:29:12.738895    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:29:12.747844    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:29:12.756435    7018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 10:29:12.775075    7018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:29:12.783647    7018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:29:12.795348    7018 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 10:29:12.795358    7018 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 10:29:12.795646    7018 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 10:29:12.877113    7018 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 10:29:12.966218    7018 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 10:29:12.966234    7018 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0307 10:29:12.977829    7018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:29:13.058533    7018 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:30:14.087064    7018 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0307 10:30:14.087078    7018 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I0307 10:30:14.087168    7018 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.028339517s)
	I0307 10:30:14.108918    7018 out.go:177] 
	W0307 10:30:14.130829    7018 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0307 10:30:14.130853    7018 out.go:239] * 
	W0307 10:30:14.131956    7018 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:30:14.211985    7018 out.go:177] 
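This is the step that sinks the run: the rewritten unit was installed on m03, but "sudo systemctl restart docker" blocks for about 61s and exits 1, so minikube aborts with RUNTIME_ENABLE. The error text already names the next moves; a sketch of that triage inside the failing worker (the -n/--node flag for minikube ssh is assumed to select m03):

	minikube ssh -p multinode-260000 -n m03
	systemctl status docker.service --no-pager
	journalctl -u docker.service -b --no-pager | tail -n 50
	systemctl cat docker.service    # the drop-in written above is the first suspect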
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-03-07 18:27:25 UTC, ends at Tue 2023-03-07 18:30:15 UTC. --
	Mar 07 18:28:29 multinode-260000 dockerd[823]: time="2023-03-07T18:28:29.553667299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 18:28:29 multinode-260000 dockerd[823]: time="2023-03-07T18:28:29.553718416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 18:28:29 multinode-260000 dockerd[823]: time="2023-03-07T18:28:29.553727844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 18:28:29 multinode-260000 dockerd[823]: time="2023-03-07T18:28:29.553859099Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b5c1d8a91fa2516e8c80365df84e3b130f3c1999b14147c7032297de307867f9 pid=2478 runtime=io.containerd.runc.v2
	Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.057242848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.057307249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.057316475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.057844466Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ac47899a738296394ded2ce0496525097adb38a6412d8fc94b3dce6877e8a33a pid=2668 runtime=io.containerd.runc.v2
	Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.175000064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.175197971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.175260153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.175483178Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b76d3e91590c9da6205b8d32d4b932be8104ea717355bd7711e406514dad7dd9 pid=2747 runtime=io.containerd.runc.v2
	Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.679769977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.679902993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.679926964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 18:28:30 multinode-260000 dockerd[823]: time="2023-03-07T18:28:30.680065305Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ae65d8b310bf85e2ffa8376af1ac87de2288952c06097e5f806c1db9cba7f352 pid=2836 runtime=io.containerd.runc.v2
	Mar 07 18:28:44 multinode-260000 dockerd[823]: time="2023-03-07T18:28:44.600236890Z" level=info msg="shim disconnected" id=fb55a8f7e7acf79ab5acef082e9687db3c86b8350d3822b8162a5264fa8a8737
	Mar 07 18:28:44 multinode-260000 dockerd[823]: time="2023-03-07T18:28:44.600660414Z" level=warning msg="cleaning up after shim disconnected" id=fb55a8f7e7acf79ab5acef082e9687db3c86b8350d3822b8162a5264fa8a8737 namespace=moby
	Mar 07 18:28:44 multinode-260000 dockerd[823]: time="2023-03-07T18:28:44.600692801Z" level=info msg="cleaning up dead shim"
	Mar 07 18:28:44 multinode-260000 dockerd[817]: time="2023-03-07T18:28:44.600890785Z" level=info msg="ignoring event" container=fb55a8f7e7acf79ab5acef082e9687db3c86b8350d3822b8162a5264fa8a8737 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 18:28:44 multinode-260000 dockerd[823]: time="2023-03-07T18:28:44.610196627Z" level=warning msg="cleanup warnings time=\"2023-03-07T18:28:44Z\" level=info msg=\"starting signal loop\" namespace=moby pid=3100 runtime=io.containerd.runc.v2\n"
	Mar 07 18:28:57 multinode-260000 dockerd[823]: time="2023-03-07T18:28:57.591630748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 18:28:57 multinode-260000 dockerd[823]: time="2023-03-07T18:28:57.591690709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 18:28:57 multinode-260000 dockerd[823]: time="2023-03-07T18:28:57.591700253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 18:28:57 multinode-260000 dockerd[823]: time="2023-03-07T18:28:57.592287539Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d7918bebc54af3eda49d9d26750f59cbb4123a06606515594c02386ec18084eb pid=3307 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	d7918bebc54af       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   195123dbe4fea
	ae65d8b310bf8       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   b76d3e91590c9
	ac47899a73829       5185b96f0becf                                                                                         About a minute ago   Running             coredns                   1                   b5c1d8a91fa25
	f4e367464e94a       bc00df424dcbf                                                                                         About a minute ago   Running             kindnet-cni               1                   fdbc154f16c5e
	b5a7ee396dc60       6f64e7135a6ec                                                                                         2 minutes ago        Running             kube-proxy                1                   f8fdeffee49cf
	fb55a8f7e7acf       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   195123dbe4fea
	26cf0a14d586f       db8f409d9a5d7                                                                                         2 minutes ago        Running             kube-scheduler            1                   2553a34510031
	84569585e5533       fce326961ae2d                                                                                         2 minutes ago        Running             etcd                      1                   07e789f3cc69b
	50c556c12dfe5       240e201d5b0d8                                                                                         2 minutes ago        Running             kube-controller-manager   1                   9c8e84f5ddfa3
	497af6d0e82e1       63d3239c3c159                                                                                         2 minutes ago        Running             kube-apiserver            1                   00bac04bca161
	efd9c03313ad9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   10 minutes ago       Exited              busybox                   0                   6c81d6df615b9
	da06b08e56174       5185b96f0becf                                                                                         11 minutes ago       Exited              coredns                   0                   5b66601ca9d1d
	37e6cf092e1c2       kindest/kindnetd@sha256:7fc2671641a1a7e7b9b8341964bd7cfe9018f497dc41d58803f88b0cc4030e07              11 minutes ago       Exited              kindnet-cni               0                   ae9d394ad7a79
	808d83da8d84b       6f64e7135a6ec                                                                                         11 minutes ago       Exited              kube-proxy                0                   1bf0ab9eb4c51
	2243964fbc4d2       240e201d5b0d8                                                                                         11 minutes ago       Exited              kube-controller-manager   0                   6ac51e9516a2e
	3b27eb7db4c28       fce326961ae2d                                                                                         11 minutes ago       Exited              etcd                      0                   cfcf920b73783
	10d167b9d9870       db8f409d9a5d7                                                                                         11 minutes ago       Exited              kube-scheduler            0                   aef4edf5b492f
	3e9b5dec9e21d       63d3239c3c159                                                                                         11 minutes ago       Exited              kube-apiserver            0                   0721a87b433b9
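	
	For reference, a listing like the one above can be regenerated against this profile; a minimal sketch, assuming the cluster from this run is still up and that crictl is present in the guest image:
	
	  # List all containers (running and exited) inside the control-plane VM.
	  out/minikube-darwin-amd64 ssh -p multinode-260000 -- sudo crictl ps -a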
	
	* 
	* ==> coredns [ac47899a7382] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 82b95b61957b89eeea31bdaf6987f010031330ef97d5f8469dbdaa80b119a5b0c9955b961009dd5b77ee3ada002b456836be781510516cbd9d015b1a704a24ea
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:38195 - 29344 "HINFO IN 6793254744361962333.4432823132512362091. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009534934s
	
	* 
	* ==> coredns [da06b08e5617] <==
	* [INFO] 10.244.0.3:49753 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000032613s
	[INFO] 10.244.0.3:59499 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078206s
	[INFO] 10.244.0.3:56252 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000031284s
	[INFO] 10.244.0.3:33352 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000029287s
	[INFO] 10.244.0.3:57361 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000030398s
	[INFO] 10.244.0.3:53316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000041736s
	[INFO] 10.244.0.3:43704 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000030091s
	[INFO] 10.244.1.2:35223 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133692s
	[INFO] 10.244.1.2:37012 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060624s
	[INFO] 10.244.1.2:47740 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043499s
	[INFO] 10.244.1.2:46035 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059142s
	[INFO] 10.244.0.3:34318 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102436s
	[INFO] 10.244.0.3:55287 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000046353s
	[INFO] 10.244.0.3:48922 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076473s
	[INFO] 10.244.0.3:58811 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000030082s
	[INFO] 10.244.1.2:51939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122385s
	[INFO] 10.244.1.2:51550 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000067643s
	[INFO] 10.244.1.2:46211 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061787s
	[INFO] 10.244.1.2:39798 - 5 "PTR IN 1.64.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011631s
	[INFO] 10.244.0.3:57108 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000237714s
	[INFO] 10.244.0.3:59650 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101016s
	[INFO] 10.244.0.3:45286 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000054929s
	[INFO] 10.244.0.3:38551 - 5 "PTR IN 1.64.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085801s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
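	
	The lookup pattern above matches an in-cluster DNS resolution check; a sketch of the equivalent probe, using the busybox pod name from this run (assumes the pod is still scheduled and the kubeconfig from this run is active):
	
	  # Resolve the kubernetes service from inside the cluster via CoreDNS.
	  kubectl exec busybox-6b86dd6d48-tw9p8 -- nslookup kubernetes.default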
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-260000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-260000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=592b1e9939a898d806f69aad174a19c45f317df1
	                    minikube.k8s.io/name=multinode-260000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_03_07T10_18_30_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Mar 2023 18:18:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-260000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Mar 2023 18:30:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Mar 2023 18:28:23 +0000   Tue, 07 Mar 2023 18:18:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Mar 2023 18:28:23 +0000   Tue, 07 Mar 2023 18:18:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Mar 2023 18:28:23 +0000   Tue, 07 Mar 2023 18:18:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Mar 2023 18:28:23 +0000   Tue, 07 Mar 2023 18:28:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.12
	  Hostname:    multinode-260000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f1600c5bd3943459736c3eb945f3a86
	  System UUID:                608611ed-0000-0000-9c3c-149d997fca88
	  Boot ID:                    0f92c037-724f-4794-9137-e15efdc0756f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.2
	  Kube-Proxy Version:         v1.26.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-tw9p8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-787d4945fb-x8m8v                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-multinode-260000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-gfgwn                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-multinode-260000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-260000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-8qwhq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-multinode-260000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 11m                  kube-proxy       
	  Normal  Starting                 2m                   kube-proxy       
	  Normal  NodeHasSufficientPID     11m                  kubelet          Node multinode-260000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                  kubelet          Node multinode-260000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                  kubelet          Node multinode-260000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                  node-controller  Node multinode-260000 event: Registered Node multinode-260000 in Controller
	  Normal  NodeReady                11m                  kubelet          Node multinode-260000 status is now: NodeReady
	  Normal  Starting                 2m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node multinode-260000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node multinode-260000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x7 over 2m8s)  kubelet          Node multinode-260000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           110s                 node-controller  Node multinode-260000 event: Registered Node multinode-260000 in Controller
	
	
	Name:               multinode-260000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-260000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Mar 2023 18:28:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-260000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Mar 2023 18:30:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Mar 2023 18:28:57 +0000   Tue, 07 Mar 2023 18:28:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Mar 2023 18:28:57 +0000   Tue, 07 Mar 2023 18:28:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Mar 2023 18:28:57 +0000   Tue, 07 Mar 2023 18:28:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Mar 2023 18:28:57 +0000   Tue, 07 Mar 2023 18:28:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.13
	  Hostname:    multinode-260000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 55a0b35cd1c54e50a53cf57138dc4032
	  System UUID:                835411ed-0000-0000-9c3c-149d997fca88
	  Boot ID:                    d711e229-1b86-4d4e-835b-240f221511a4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.2
	  Kube-Proxy Version:         v1.26.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-z6kqp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-pxshj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 81s                kube-proxy  
	  Normal  Starting                 10m                kube-proxy  
	  Normal  NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-260000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-260000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-260000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                kubelet     Starting kubelet.
	  Normal  NodeReady                10m                kubelet     Node multinode-260000-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  85s (x2 over 85s)  kubelet     Node multinode-260000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s (x2 over 85s)  kubelet     Node multinode-260000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x2 over 85s)  kubelet     Node multinode-260000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 85s                kubelet     Starting kubelet.
	  Normal  NodeReady                79s                kubelet     Node multinode-260000-m02 status is now: NodeReady
	
	
	Name:               multinode-260000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-260000-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Mar 2023 18:26:48 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-260000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Mar 2023 18:26:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 07 Mar 2023 18:26:56 +0000   Tue, 07 Mar 2023 18:29:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 07 Mar 2023 18:26:56 +0000   Tue, 07 Mar 2023 18:29:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 07 Mar 2023 18:26:56 +0000   Tue, 07 Mar 2023 18:29:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 07 Mar 2023 18:26:56 +0000   Tue, 07 Mar 2023 18:29:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.64.15
	  Hostname:    multinode-260000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 9735f5acb6484418a3add414f55a7294
	  System UUID:                79b211ed-0000-0000-8f77-149d997fca88
	  Boot ID:                    861c3d48-b666-437f-8ed9-d8b1fb470f7a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.2
	  Kube-Proxy Version:         v1.26.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-dxpfk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kindnet-j5gj9               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m10s
	  kube-system                 kube-proxy-q8cm8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  Starting                 3m25s                  kube-proxy       
	  Normal  Starting                 4m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s (x2 over 4m11s)  kubelet          Node multinode-260000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x2 over 4m11s)  kubelet          Node multinode-260000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x2 over 4m11s)  kubelet          Node multinode-260000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m57s                  kubelet          Node multinode-260000-m03 status is now: NodeReady
	  Normal  Starting                 3m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m29s (x2 over 3m29s)  kubelet          Node multinode-260000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m29s (x2 over 3m29s)  kubelet          Node multinode-260000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m29s (x2 over 3m29s)  kubelet          Node multinode-260000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m20s                  kubelet          Node multinode-260000-m03 status is now: NodeReady
	  Normal  RegisteredNode           110s                   node-controller  Node multinode-260000-m03 event: Registered Node multinode-260000-m03 in Controller
	  Normal  NodeNotReady             70s                    node-controller  Node multinode-260000-m03 status is now: NodeNotReady
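	
	The unreachable taints and the NodeNotReady event above can be cross-checked from the host; a sketch, assuming the kubeconfig from this run still points at the cluster:
	
	  # Show the taints the node controller applied to the unreachable node.
	  kubectl get node multinode-260000-m03 -o jsonpath='{.spec.taints}'
	  # Full status, including the Unknown conditions shown above.
	  kubectl describe node multinode-260000-m03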
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.027546] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +4.610481] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006956] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.234653] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.038923] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.865054] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +26.325676] systemd-fstab-generator[519]: Ignoring "noauto" for root device
	[  +0.079667] systemd-fstab-generator[530]: Ignoring "noauto" for root device
	[  +0.829195] systemd-fstab-generator[748]: Ignoring "noauto" for root device
	[  +0.185897] systemd-fstab-generator[784]: Ignoring "noauto" for root device
	[  +0.081951] systemd-fstab-generator[795]: Ignoring "noauto" for root device
	[  +0.091850] systemd-fstab-generator[808]: Ignoring "noauto" for root device
	[  +1.339091] systemd-fstab-generator[964]: Ignoring "noauto" for root device
	[  +0.092219] systemd-fstab-generator[975]: Ignoring "noauto" for root device
	[  +0.090588] systemd-fstab-generator[986]: Ignoring "noauto" for root device
	[  +0.093612] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[Mar 7 18:28] systemd-fstab-generator[1233]: Ignoring "noauto" for root device
	[  +0.239907] kauditd_printk_skb: 67 callbacks suppressed
	[  +7.083787] kauditd_printk_skb: 8 callbacks suppressed
	[ +10.861594] kauditd_printk_skb: 16 callbacks suppressed
	
	* 
	* ==> etcd [3b27eb7db4c2] <==
	* {"level":"info","ts":"2023-03-07T18:18:23.584Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-07T18:18:24.257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 is starting a new election at term 1"}
	{"level":"info","ts":"2023-03-07T18:18:24.257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-03-07T18:18:24.257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgPreVoteResp from 893b0beac40933c0 at term 1"}
	{"level":"info","ts":"2023-03-07T18:18:24.257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became candidate at term 2"}
	{"level":"info","ts":"2023-03-07T18:18:24.257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgVoteResp from 893b0beac40933c0 at term 2"}
	{"level":"info","ts":"2023-03-07T18:18:24.257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became leader at term 2"}
	{"level":"info","ts":"2023-03-07T18:18:24.258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 893b0beac40933c0 elected leader 893b0beac40933c0 at term 2"}
	{"level":"info","ts":"2023-03-07T18:18:24.260Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:18:24.261Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"893b0beac40933c0","local-member-attributes":"{Name:multinode-260000 ClientURLs:[https://192.168.64.12:2379]}","request-path":"/0/members/893b0beac40933c0/attributes","cluster-id":"51ecae2d8304f353","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-07T18:18:24.261Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-07T18:18:24.262Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-07T18:18:24.262Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-07T18:18:24.263Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:18:24.281Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:18:24.281Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:18:24.263Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-07T18:18:24.281Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-07T18:18:24.283Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.12:2379"}
	{"level":"info","ts":"2023-03-07T18:26:59.661Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-03-07T18:26:59.661Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"multinode-260000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"]}
	{"level":"info","ts":"2023-03-07T18:26:59.676Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"893b0beac40933c0","current-leader-member-id":"893b0beac40933c0"}
	{"level":"info","ts":"2023-03-07T18:26:59.677Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-03-07T18:26:59.678Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-03-07T18:26:59.678Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"multinode-260000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"]}
	
	* 
	* ==> etcd [84569585e553] <==
	* {"level":"info","ts":"2023-03-07T18:28:10.548Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","added-peer-id":"893b0beac40933c0","added-peer-peer-urls":["https://192.168.64.12:2380"]}
	{"level":"info","ts":"2023-03-07T18:28:10.549Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:28:10.549Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-07T18:28:10.550Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-07T18:28:10.550Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"893b0beac40933c0","initial-advertise-peer-urls":["https://192.168.64.12:2380"],"listen-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.12:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-07T18:28:10.550Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-07T18:28:10.554Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-07T18:28:10.555Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-07T18:28:10.555Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-07T18:28:10.555Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-03-07T18:28:10.555Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 is starting a new election at term 2"}
	{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgPreVoteResp from 893b0beac40933c0 at term 2"}
	{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became candidate at term 3"}
	{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgVoteResp from 893b0beac40933c0 at term 3"}
	{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became leader at term 3"}
	{"level":"info","ts":"2023-03-07T18:28:11.924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 893b0beac40933c0 elected leader 893b0beac40933c0 at term 3"}
	{"level":"info","ts":"2023-03-07T18:28:11.926Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"893b0beac40933c0","local-member-attributes":"{Name:multinode-260000 ClientURLs:[https://192.168.64.12:2379]}","request-path":"/0/members/893b0beac40933c0/attributes","cluster-id":"51ecae2d8304f353","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-07T18:28:11.926Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-07T18:28:11.927Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-07T18:28:11.927Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-07T18:28:11.926Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-07T18:28:11.929Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-07T18:28:11.932Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.12:2379"}
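	
	A health probe against the restarted member is one way to confirm the "serving client traffic" lines above; a sketch that reuses the certificate paths etcd logs at startup, running etcdctl from inside the etcd pod so no extra tooling is assumed on the host:
	
	  kubectl -n kube-system exec etcd-multinode-260000 -- etcdctl \
	    --endpoints=https://192.168.64.12:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint health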
	
	* 
	* ==> kernel <==
	*  18:30:16 up 2 min,  0 users,  load average: 0.12, 0.06, 0.01
	Linux multinode-260000 5.10.57 #1 SMP Fri Feb 24 23:00:41 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [37e6cf092e1c] <==
	* I0307 18:26:28.447090       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0307 18:26:28.447127       1 main.go:227] handling current node
	I0307 18:26:28.447136       1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
	I0307 18:26:28.447141       1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24] 
	I0307 18:26:28.447369       1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
	I0307 18:26:28.447402       1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.2.0/24] 
	I0307 18:26:38.451649       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0307 18:26:38.451684       1 main.go:227] handling current node
	I0307 18:26:38.451692       1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
	I0307 18:26:38.451696       1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24] 
	I0307 18:26:38.451779       1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
	I0307 18:26:38.451806       1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.2.0/24] 
	I0307 18:26:48.456938       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0307 18:26:48.457106       1 main.go:227] handling current node
	I0307 18:26:48.457237       1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
	I0307 18:26:48.457326       1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24] 
	I0307 18:26:48.457646       1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
	I0307 18:26:48.457815       1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24] 
	I0307 18:26:48.457898       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.64.15 Flags: [] Table: 0} 
	I0307 18:26:58.466487       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0307 18:26:58.466647       1 main.go:227] handling current node
	I0307 18:26:58.466702       1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
	I0307 18:26:58.466828       1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24] 
	I0307 18:26:58.466977       1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
	I0307 18:26:58.467105       1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24] 
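	
	The "Adding route" entry at 18:26:48 above corresponds to a kernel route for the m03 pod CIDR; a sketch for verifying it from the control-plane guest:
	
	  # The 10.244.3.0/24 route via 192.168.64.15 should appear here while m03 is reachable.
	  out/minikube-darwin-amd64 ssh -p multinode-260000 -- ip route show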
	
	* 
	* ==> kindnet [f4e367464e94] <==
	* I0307 18:29:27.914842       1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24] 
	I0307 18:29:37.926398       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0307 18:29:37.926411       1 main.go:227] handling current node
	I0307 18:29:37.926418       1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
	I0307 18:29:37.926421       1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24] 
	I0307 18:29:37.926480       1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
	I0307 18:29:37.926485       1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24] 
	I0307 18:29:47.934496       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0307 18:29:47.934564       1 main.go:227] handling current node
	I0307 18:29:47.934612       1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
	I0307 18:29:47.934691       1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24] 
	I0307 18:29:47.934816       1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
	I0307 18:29:47.934920       1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24] 
	I0307 18:29:57.940656       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0307 18:29:57.940856       1 main.go:227] handling current node
	I0307 18:29:57.940976       1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
	I0307 18:29:57.941042       1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24] 
	I0307 18:29:57.941226       1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
	I0307 18:29:57.941331       1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24] 
	I0307 18:30:07.945187       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0307 18:30:07.945222       1 main.go:227] handling current node
	I0307 18:30:07.945230       1 main.go:223] Handling node with IPs: map[192.168.64.13:{}]
	I0307 18:30:07.945235       1 main.go:250] Node multinode-260000-m02 has CIDR [10.244.1.0/24] 
	I0307 18:30:07.945510       1 main.go:223] Handling node with IPs: map[192.168.64.15:{}]
	I0307 18:30:07.945735       1 main.go:250] Node multinode-260000-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [3e9b5dec9e21] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0307 18:26:59.674314       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0307 18:26:59.674348       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0307 18:26:59.674380       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [497af6d0e82e] <==
	* I0307 18:28:13.110130       1 establishing_controller.go:76] Starting EstablishingController
	I0307 18:28:13.110138       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0307 18:28:13.110297       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0307 18:28:13.110308       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0307 18:28:13.110418       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0307 18:28:13.110677       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0307 18:28:13.216868       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0307 18:28:13.222242       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0307 18:28:13.225197       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0307 18:28:13.291153       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0307 18:28:13.292250       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0307 18:28:13.292257       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0307 18:28:13.292505       1 cache.go:39] Caches are synced for autoregister controller
	I0307 18:28:13.294755       1 shared_informer.go:280] Caches are synced for configmaps
	I0307 18:28:13.300044       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0307 18:28:13.300090       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0307 18:28:13.906724       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0307 18:28:14.098838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0307 18:28:15.623589       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0307 18:28:15.731369       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0307 18:28:15.739807       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0307 18:28:15.781083       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0307 18:28:15.785959       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0307 18:28:26.361442       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0307 18:28:26.382833       1 controller.go:615] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [2243964fbc4d] <==
	* I0307 18:18:51.296089       1 node_lifecycle_controller.go:1231] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0307 18:19:13.246738       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-260000-m02" does not exist
	I0307 18:19:13.257767       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pxshj"
	I0307 18:19:13.263613       1 range_allocator.go:372] Set node multinode-260000-m02 PodCIDR to [10.244.1.0/24]
	I0307 18:19:13.263757       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-z6kqp"
	W0307 18:19:16.299141       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-260000-m02. Assuming now as a timestamp.
	I0307 18:19:16.299544       1 event.go:294] "Event occurred" object="multinode-260000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-260000-m02 event: Registered Node multinode-260000-m02 in Controller"
	W0307 18:19:26.716043       1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
	I0307 18:19:28.956380       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0307 18:19:28.986793       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-dmrds"
	I0307 18:19:28.992627       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-tw9p8"
	I0307 18:19:31.308952       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48-dmrds" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-6b86dd6d48-dmrds"
	W0307 18:26:06.133983       1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
	W0307 18:26:06.134312       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-260000-m03" does not exist
	I0307 18:26:06.140204       1 range_allocator.go:372] Set node multinode-260000-m03 PodCIDR to [10.244.2.0/24]
	I0307 18:26:06.145606       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j5gj9"
	I0307 18:26:06.155170       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q8cm8"
	W0307 18:26:06.393676       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-260000-m03. Assuming now as a timestamp.
	I0307 18:26:06.393890       1 event.go:294] "Event occurred" object="multinode-260000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-260000-m03 event: Registered Node multinode-260000-m03 in Controller"
	W0307 18:26:19.558458       1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
	W0307 18:26:47.345943       1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
	W0307 18:26:48.162669       1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
	W0307 18:26:48.162859       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-260000-m03" does not exist
	I0307 18:26:48.168066       1 range_allocator.go:372] Set node multinode-260000-m03 PodCIDR to [10.244.3.0/24]
	W0307 18:26:56.666685       1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m03 node
	
	* 
	* ==> kube-controller-manager [50c556c12dfe] <==
	* I0307 18:28:26.349894       1 shared_informer.go:280] Caches are synced for deployment
	I0307 18:28:26.352648       1 shared_informer.go:280] Caches are synced for ReplicaSet
	I0307 18:28:26.355935       1 shared_informer.go:280] Caches are synced for disruption
	I0307 18:28:26.360793       1 shared_informer.go:280] Caches are synced for ephemeral
	I0307 18:28:26.363962       1 shared_informer.go:280] Caches are synced for daemon sets
	I0307 18:28:26.364187       1 shared_informer.go:280] Caches are synced for GC
	I0307 18:28:26.369119       1 shared_informer.go:280] Caches are synced for endpoint
	I0307 18:28:26.372783       1 shared_informer.go:280] Caches are synced for PVC protection
	I0307 18:28:26.385561       1 shared_informer.go:280] Caches are synced for stateful set
	I0307 18:28:26.395694       1 shared_informer.go:280] Caches are synced for ReplicationController
	I0307 18:28:26.443107       1 shared_informer.go:280] Caches are synced for persistent volume
	I0307 18:28:26.783215       1 shared_informer.go:280] Caches are synced for garbage collector
	I0307 18:28:26.783336       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0307 18:28:26.789489       1 shared_informer.go:280] Caches are synced for garbage collector
	I0307 18:28:48.464881       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-dxpfk"
	W0307 18:28:51.475477       1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m03 node
	W0307 18:28:52.260934       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-260000-m02" does not exist
	W0307 18:28:52.261307       1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m03 node
	I0307 18:28:52.266304       1 range_allocator.go:372] Set node multinode-260000-m02 PodCIDR to [10.244.1.0/24]
	W0307 18:28:57.920623       1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
	I0307 18:29:01.349195       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48-dmrds" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-6b86dd6d48-dmrds"
	I0307 18:29:06.355165       1 event.go:294] "Event occurred" object="multinode-260000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-260000-m03 status is now: NodeNotReady"
	W0307 18:29:06.355726       1 topologycache.go:232] Can't get CPU or zone information for multinode-260000-m02 node
	I0307 18:29:06.363745       1 event.go:294] "Event occurred" object="kube-system/kindnet-j5gj9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0307 18:29:06.369443       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-q8cm8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-proxy [808d83da8d84] <==
	* I0307 18:18:42.494750       1 node.go:163] Successfully retrieved node IP: 192.168.64.12
	I0307 18:18:42.494821       1 server_others.go:109] "Detected node IP" address="192.168.64.12"
	I0307 18:18:42.494837       1 server_others.go:535] "Using iptables proxy"
	I0307 18:18:42.540347       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0307 18:18:42.540362       1 server_others.go:176] "Using iptables Proxier"
	I0307 18:18:42.540384       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 18:18:42.540579       1 server.go:655] "Version info" version="v1.26.2"
	I0307 18:18:42.540586       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 18:18:42.542139       1 config.go:317] "Starting service config controller"
	I0307 18:18:42.542152       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0307 18:18:42.542165       1 config.go:226] "Starting endpoint slice config controller"
	I0307 18:18:42.542168       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0307 18:18:42.542805       1 config.go:444] "Starting node config controller"
	I0307 18:18:42.542810       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0307 18:18:42.642774       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0307 18:18:42.642783       1 shared_informer.go:280] Caches are synced for service config
	I0307 18:18:42.642913       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-proxy [b5a7ee396dc6] <==
	* I0307 18:28:15.472153       1 node.go:163] Successfully retrieved node IP: 192.168.64.12
	I0307 18:28:15.472708       1 server_others.go:109] "Detected node IP" address="192.168.64.12"
	I0307 18:28:15.472783       1 server_others.go:535] "Using iptables proxy"
	I0307 18:28:15.556618       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0307 18:28:15.556713       1 server_others.go:176] "Using iptables Proxier"
	I0307 18:28:15.557566       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 18:28:15.558340       1 server.go:655] "Version info" version="v1.26.2"
	I0307 18:28:15.558371       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 18:28:15.560891       1 config.go:317] "Starting service config controller"
	I0307 18:28:15.561767       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0307 18:28:15.561822       1 config.go:226] "Starting endpoint slice config controller"
	I0307 18:28:15.561848       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0307 18:28:15.565094       1 config.go:444] "Starting node config controller"
	I0307 18:28:15.565123       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0307 18:28:15.662569       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0307 18:28:15.662641       1 shared_informer.go:280] Caches are synced for service config
	I0307 18:28:15.665665       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [10d167b9d987] <==
	* E0307 18:18:25.562466       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0307 18:18:25.562560       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0307 18:18:25.562674       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0307 18:18:25.562782       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 18:18:25.562847       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0307 18:18:25.563105       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0307 18:18:25.563202       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0307 18:18:25.563628       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0307 18:18:25.563744       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0307 18:18:26.434289       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0307 18:18:26.434408       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0307 18:18:26.442009       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 18:18:26.442097       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0307 18:18:26.456512       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0307 18:18:26.456552       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0307 18:18:26.488229       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0307 18:18:26.488359       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0307 18:18:26.563741       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 18:18:26.564207       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0307 18:18:26.667408       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 18:18:26.667448       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0307 18:18:26.955242       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 18:26:59.644696       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0307 18:26:59.645252       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0307 18:26:59.645273       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [26cf0a14d586] <==
	* I0307 18:28:11.367805       1 serving.go:348] Generated self-signed cert in-memory
	W0307 18:28:13.150328       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0307 18:28:13.150362       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 18:28:13.150371       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0307 18:28:13.150376       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0307 18:28:13.235320       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.2"
	I0307 18:28:13.235372       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 18:28:13.239708       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0307 18:28:13.239806       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0307 18:28:13.241227       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 18:28:13.242006       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0307 18:28:13.341779       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-03-07 18:27:25 UTC, ends at Tue 2023-03-07 18:30:17 UTC. --
	Mar 07 18:28:17 multinode-260000 kubelet[1239]: E0307 18:28:17.157728    1239 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 07 18:28:17 multinode-260000 kubelet[1239]: E0307 18:28:17.157918    1239 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6-config-volume podName:c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6 nodeName:}" failed. No retries permitted until 2023-03-07 18:28:21.157899363 +0000 UTC m=+12.844186086 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6-config-volume") pod "coredns-787d4945fb-x8m8v" (UID: "c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6") : object "kube-system"/"coredns" not registered
	Mar 07 18:28:17 multinode-260000 kubelet[1239]: I0307 18:28:17.243165    1239 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdbc154f16c5ecae8e4cd1c88a503e2325c5dd83beb13fd64441377c4f9e7ec0"
	Mar 07 18:28:17 multinode-260000 kubelet[1239]: E0307 18:28:17.862532    1239 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Mar 07 18:28:17 multinode-260000 kubelet[1239]: E0307 18:28:17.862660    1239 projected.go:198] Error preparing data for projected volume kube-api-access-qh9hd for pod default/busybox-6b86dd6d48-tw9p8: object "default"/"kube-root-ca.crt" not registered
	Mar 07 18:28:17 multinode-260000 kubelet[1239]: E0307 18:28:17.862775    1239 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00822b5c-f30a-4e57-9efd-48e3cae67dd8-kube-api-access-qh9hd podName:00822b5c-f30a-4e57-9efd-48e3cae67dd8 nodeName:}" failed. No retries permitted until 2023-03-07 18:28:21.862765021 +0000 UTC m=+13.549051734 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qh9hd" (UniqueName: "kubernetes.io/projected/00822b5c-f30a-4e57-9efd-48e3cae67dd8-kube-api-access-qh9hd") pod "busybox-6b86dd6d48-tw9p8" (UID: "00822b5c-f30a-4e57-9efd-48e3cae67dd8") : object "default"/"kube-root-ca.crt" not registered
	Mar 07 18:28:18 multinode-260000 kubelet[1239]: E0307 18:28:18.278405    1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-x8m8v" podUID=c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6
	Mar 07 18:28:18 multinode-260000 kubelet[1239]: E0307 18:28:18.279104    1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-tw9p8" podUID=00822b5c-f30a-4e57-9efd-48e3cae67dd8
	Mar 07 18:28:18 multinode-260000 kubelet[1239]: E0307 18:28:18.551174    1239 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Mar 07 18:28:19 multinode-260000 kubelet[1239]: E0307 18:28:19.553176    1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-x8m8v" podUID=c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6
	Mar 07 18:28:19 multinode-260000 kubelet[1239]: E0307 18:28:19.553320    1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-tw9p8" podUID=00822b5c-f30a-4e57-9efd-48e3cae67dd8
	Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.191160    1239 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.191491    1239 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6-config-volume podName:c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6 nodeName:}" failed. No retries permitted until 2023-03-07 18:28:29.191480722 +0000 UTC m=+20.877767432 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6-config-volume") pod "coredns-787d4945fb-x8m8v" (UID: "c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6") : object "kube-system"/"coredns" not registered
	Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.552920    1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-x8m8v" podUID=c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6
	Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.553059    1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-tw9p8" podUID=00822b5c-f30a-4e57-9efd-48e3cae67dd8
	Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.897966    1239 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.898129    1239 projected.go:198] Error preparing data for projected volume kube-api-access-qh9hd for pod default/busybox-6b86dd6d48-tw9p8: object "default"/"kube-root-ca.crt" not registered
	Mar 07 18:28:21 multinode-260000 kubelet[1239]: E0307 18:28:21.898305    1239 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00822b5c-f30a-4e57-9efd-48e3cae67dd8-kube-api-access-qh9hd podName:00822b5c-f30a-4e57-9efd-48e3cae67dd8 nodeName:}" failed. No retries permitted until 2023-03-07 18:28:29.898286181 +0000 UTC m=+21.584572909 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qh9hd" (UniqueName: "kubernetes.io/projected/00822b5c-f30a-4e57-9efd-48e3cae67dd8-kube-api-access-qh9hd") pod "busybox-6b86dd6d48-tw9p8" (UID: "00822b5c-f30a-4e57-9efd-48e3cae67dd8") : object "default"/"kube-root-ca.crt" not registered
	Mar 07 18:28:23 multinode-260000 kubelet[1239]: E0307 18:28:23.554580    1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-787d4945fb-x8m8v" podUID=c3cdad54-bad8-4a77-a822-c4bc5c8dc1b6
	Mar 07 18:28:23 multinode-260000 kubelet[1239]: E0307 18:28:23.554943    1239 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-6b86dd6d48-tw9p8" podUID=00822b5c-f30a-4e57-9efd-48e3cae67dd8
	Mar 07 18:28:30 multinode-260000 kubelet[1239]: I0307 18:28:30.592014    1239 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b76d3e91590c9da6205b8d32d4b932be8104ea717355bd7711e406514dad7dd9"
	Mar 07 18:28:44 multinode-260000 kubelet[1239]: I0307 18:28:44.723203    1239 scope.go:115] "RemoveContainer" containerID="c4559ff3518da6f34f4cdc748b8c7c12071cc25ff90faaec5b6ea9e714e7aba4"
	Mar 07 18:28:44 multinode-260000 kubelet[1239]: I0307 18:28:44.723439    1239 scope.go:115] "RemoveContainer" containerID="fb55a8f7e7acf79ab5acef082e9687db3c86b8350d3822b8162a5264fa8a8737"
	Mar 07 18:28:44 multinode-260000 kubelet[1239]: E0307 18:28:44.723568    1239 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(0b88c317-8e90-4927-b4f8-cae5597b5dc8)\"" pod="kube-system/storage-provisioner" podUID=0b88c317-8e90-4927-b4f8-cae5597b5dc8
	Mar 07 18:28:57 multinode-260000 kubelet[1239]: I0307 18:28:57.552994    1239 scope.go:115] "RemoveContainer" containerID="fb55a8f7e7acf79ab5acef082e9687db3c86b8350d3822b8162a5264fa8a8737"
	
-- /stdout --
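The logs above show the control plane coming back cleanly while multinode-260000-m03 is marked NodeNotReady, which is why the replacement busybox pod described below never leaves Pending. A minimal sketch for confirming this by hand, assuming the kubeconfig context from this run is still available (these are the same flags the harness uses elsewhere in this report):

	kubectl --context multinode-260000 get nodes -o wide
	kubectl --context multinode-260000 describe node multinode-260000-m03
	kubectl --context multinode-260000 get po -A --field-selector=status.phase!=Running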
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-260000 -n multinode-260000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-260000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-6b86dd6d48-dxpfk
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-260000 describe pod busybox-6b86dd6d48-dxpfk
helpers_test.go:282: (dbg) kubectl --context multinode-260000 describe pod busybox-6b86dd6d48-dxpfk:

-- stdout --
	Name:             busybox-6b86dd6d48-dxpfk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             multinode-260000-m03/
	Labels:           app=busybox
	                  pod-template-hash=6b86dd6d48
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-6b86dd6d48
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rqb2s (ro)
	Conditions:
	  Type           Status
	  PodScheduled   True 
	Volumes:
	  kube-api-access-rqb2s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  89s   default-scheduler  Successfully assigned default/busybox-6b86dd6d48-dxpfk to multinode-260000-m03

-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (198.98s)
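The failing sequence can be replayed outside the test harness with the same commands multinode_test.go runs; a minimal sketch, assuming a local out/minikube-darwin-amd64 build and the profile name from this run:

	out/minikube-darwin-amd64 node list -p multinode-260000
	out/minikube-darwin-amd64 stop -p multinode-260000
	out/minikube-darwin-amd64 start -p multinode-260000 --wait=true -v=8 --alsologtostderr
	kubectl --context multinode-260000 get nodes

In this run the start command exited with status 90 before all three nodes rejoined, matching the log output above.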
TestRunningBinaryUpgrade (116.72s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.3698172508.exe start -p running-upgrade-359000 --memory=2200 --vm-driver=hyperkit 
E0307 10:42:21.468032    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:43:08.342047    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.3698172508.exe start -p running-upgrade-359000 --memory=2200 --vm-driver=hyperkit : (1m36.998488932s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-359000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:138: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p running-upgrade-359000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90 (15.326100352s)

-- stdout --
	* [running-upgrade-359000] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.2
	* Using the hyperkit driver based on existing profile
	* Starting control plane node running-upgrade-359000 in cluster running-upgrade-359000
	* Updating the running hyperkit "running-upgrade-359000" VM ...
	
-- /stdout --
** stderr ** 
	I0307 10:43:32.932758    8980 out.go:296] Setting OutFile to fd 1 ...
	I0307 10:43:32.933465    8980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:43:32.933473    8980 out.go:309] Setting ErrFile to fd 2...
	I0307 10:43:32.933480    8980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:43:32.933745    8980 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15985-3430/.minikube/bin
	I0307 10:43:32.935610    8980 out.go:303] Setting JSON to false
	I0307 10:43:32.954842    8980 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4387,"bootTime":1678210225,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 10:43:32.954952    8980 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0307 10:43:32.976930    8980 out.go:177] * [running-upgrade-359000] minikube v1.29.0 on Darwin 13.2.1
	I0307 10:43:33.035163    8980 notify.go:220] Checking for updates...
	I0307 10:43:33.072910    8980 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 10:43:33.114878    8980 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:43:33.135906    8980 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 10:43:33.156894    8980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:43:33.198889    8980 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	I0307 10:43:33.220003    8980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:43:33.241701    8980 config.go:182] Loaded profile config "running-upgrade-359000": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0307 10:43:33.241736    8980 start_flags.go:687] config upgrade: Driver=hyperkit
	I0307 10:43:33.241750    8980 start_flags.go:699] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9
	I0307 10:43:33.241885    8980 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/running-upgrade-359000/config.json ...
	I0307 10:43:33.243217    8980 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:43:33.243275    8980 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:43:33.250556    8980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52607
	I0307 10:43:33.250880    8980 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:43:33.251297    8980 main.go:141] libmachine: Using API Version  1
	I0307 10:43:33.251312    8980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:43:33.251633    8980 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:43:33.251792    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .DriverName
	I0307 10:43:33.273102    8980 out.go:177] * Kubernetes 1.26.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.2
	I0307 10:43:33.294728    8980 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 10:43:33.295057    8980 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:43:33.295083    8980 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:43:33.301996    8980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52609
	I0307 10:43:33.302331    8980 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:43:33.302713    8980 main.go:141] libmachine: Using API Version  1
	I0307 10:43:33.302728    8980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:43:33.302939    8980 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:43:33.303030    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .DriverName
	I0307 10:43:33.351900    8980 out.go:177] * Using the hyperkit driver based on existing profile
	I0307 10:43:33.372927    8980 start.go:296] selected driver: hyperkit
	I0307 10:43:33.372955    8980 start.go:857] validating driver "hyperkit" against &{Name:running-upgrade-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.64.26 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 10:43:33.373129    8980 start.go:868] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:43:33.376896    8980 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:43:33.377013    8980 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15985-3430/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0307 10:43:33.383724    8980 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.29.0
	I0307 10:43:33.386905    8980 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:43:33.386923    8980 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0307 10:43:33.387009    8980 cni.go:84] Creating CNI manager for ""
	I0307 10:43:33.387026    8980 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0307 10:43:33.387034    8980 start_flags.go:319] config:
	{Name:running-upgrade-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.64.26 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 10:43:33.387144    8980 iso.go:125] acquiring lock: {Name:mk7e0ac9e85418e0580033b84b7097185a725e89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:43:33.424916    8980 out.go:177] * Starting control plane node running-upgrade-359000 in cluster running-upgrade-359000
	I0307 10:43:33.462101    8980 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W0307 10:43:33.582300    8980 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0307 10:43:33.582442    8980 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/running-upgrade-359000/config.json ...
	I0307 10:43:33.582607    8980 cache.go:107] acquiring lock: {Name:mk3de1a4b2f8657460ce7e426d2000ec664d1e22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:43:33.582628    8980 cache.go:107] acquiring lock: {Name:mk71fa6ddb70d8b6d64a9edfa611cccb9aa2b543 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:43:33.582658    8980 cache.go:107] acquiring lock: {Name:mk99a42854798d0a14215c9f87dbf10142e0cd83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:43:33.582604    8980 cache.go:107] acquiring lock: {Name:mk643addc8786a70b7d1a68e4e2918ed283a2830 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:43:33.582733    8980 cache.go:107] acquiring lock: {Name:mk095f08326069c7bfea6a0a732ed07ee738970e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:43:33.582853    8980 cache.go:107] acquiring lock: {Name:mk8becfe1d92404494f6ad3afbda744f0d03e851 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:43:33.582886    8980 cache.go:107] acquiring lock: {Name:mkd016103c20631968468118382bc2bcd4f0c536 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:43:33.582921    8980 cache.go:107] acquiring lock: {Name:mke2b9613300cec6162ecadadb60bfc62ce1026c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:43:33.583001    8980 cache.go:115] /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0307 10:43:33.583033    8980 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 440.377µs
	I0307 10:43:33.583063    8980 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0307 10:43:33.583039    8980 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0307 10:43:33.583092    8980 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0307 10:43:33.583094    8980 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0307 10:43:33.583186    8980 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0307 10:43:33.583242    8980 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0307 10:43:33.583353    8980 cache.go:193] Successfully downloaded all kic artifacts
	I0307 10:43:33.583414    8980 start.go:364] acquiring machines lock for running-upgrade-359000: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:43:33.583475    8980 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0307 10:43:33.583461    8980 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0307 10:43:33.583538    8980 start.go:368] acquired machines lock for "running-upgrade-359000" in 99.813µs
	I0307 10:43:33.583604    8980 start.go:96] Skipping create...Using existing machine configuration
	I0307 10:43:33.583626    8980 fix.go:55] fixHost starting: minikube
	I0307 10:43:33.584129    8980 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:43:33.584163    8980 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:43:33.595223    8980 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0307 10:43:33.596338    8980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52611
	I0307 10:43:33.596578    8980 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0307 10:43:33.597309    8980 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:43:33.598223    8980 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0307 10:43:33.598590    8980 main.go:141] libmachine: Using API Version  1
	I0307 10:43:33.598611    8980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:43:33.598873    8980 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:43:33.599095    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .DriverName
	I0307 10:43:33.599237    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetState
	I0307 10:43:33.599356    8980 main.go:141] libmachine: (running-upgrade-359000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:43:33.599488    8980 main.go:141] libmachine: (running-upgrade-359000) DBG | hyperkit pid from json: 8816
	I0307 10:43:33.599522    8980 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0307 10:43:33.601024    8980 fix.go:103] recreateIfNeeded on running-upgrade-359000: state=Running err=<nil>
	W0307 10:43:33.601051    8980 fix.go:129] unexpected machine state, will restart: <nil>
	I0307 10:43:33.601116    8980 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0307 10:43:33.625860    8980 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0307 10:43:33.661012    8980 out.go:177] * Updating the running hyperkit "running-upgrade-359000" VM ...
	I0307 10:43:33.661302    8980 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0307 10:43:33.697969    8980 machine.go:88] provisioning docker machine ...
	I0307 10:43:33.697990    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .DriverName
	I0307 10:43:33.698228    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetMachineName
	I0307 10:43:33.698378    8980 buildroot.go:166] provisioning hostname "running-upgrade-359000"
	I0307 10:43:33.698392    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetMachineName
	I0307 10:43:33.698528    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHHostname
	I0307 10:43:33.698633    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHPort
	I0307 10:43:33.698735    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:33.698833    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:33.698928    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHUsername
	I0307 10:43:33.699058    8980 main.go:141] libmachine: Using SSH client type: native
	I0307 10:43:33.699444    8980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.26 22 <nil> <nil>}
	I0307 10:43:33.699454    8980 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-359000 && echo "running-upgrade-359000" | sudo tee /etc/hostname
	I0307 10:43:33.781459    8980 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-359000
	
	I0307 10:43:33.781486    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHHostname
	I0307 10:43:33.781633    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHPort
	I0307 10:43:33.781743    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:33.781853    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:33.781951    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHUsername
	I0307 10:43:33.782081    8980 main.go:141] libmachine: Using SSH client type: native
	I0307 10:43:33.782394    8980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.26 22 <nil> <nil>}
	I0307 10:43:33.782407    8980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-359000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-359000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-359000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 10:43:33.860580    8980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 10:43:33.860603    8980 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
	I0307 10:43:33.860620    8980 buildroot.go:174] setting up certificates
	I0307 10:43:33.860637    8980 provision.go:83] configureAuth start
	I0307 10:43:33.860650    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetMachineName
	I0307 10:43:33.860801    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetIP
	I0307 10:43:33.860920    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHHostname
	I0307 10:43:33.861032    8980 provision.go:138] copyHostCerts
	I0307 10:43:33.861113    8980 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
	I0307 10:43:33.861122    8980 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:43:33.861262    8980 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
	I0307 10:43:33.861478    8980 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
	I0307 10:43:33.861485    8980 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:43:33.861546    8980 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
	I0307 10:43:33.861702    8980 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
	I0307 10:43:33.861708    8980 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:43:33.861769    8980 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
	I0307 10:43:33.861901    8980 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-359000 san=[192.168.64.26 192.168.64.26 localhost 127.0.0.1 minikube running-upgrade-359000]
	I0307 10:43:33.973528    8980 provision.go:172] copyRemoteCerts
	I0307 10:43:33.973605    8980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 10:43:33.973631    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHHostname
	I0307 10:43:33.973794    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHPort
	I0307 10:43:33.973883    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:33.973987    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHUsername
	I0307 10:43:33.974079    8980 sshutil.go:53] new ssh client: &{IP:192.168.64.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/running-upgrade-359000/id_rsa Username:docker}
	I0307 10:43:34.016932    8980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 10:43:34.026491    8980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0307 10:43:34.036303    8980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 10:43:34.045448    8980 provision.go:86] duration metric: configureAuth took 184.792981ms
	I0307 10:43:34.045462    8980 buildroot.go:189] setting minikube options for container-runtime
	I0307 10:43:34.045597    8980 config.go:182] Loaded profile config "running-upgrade-359000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0307 10:43:34.045618    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .DriverName
	I0307 10:43:34.045788    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHHostname
	I0307 10:43:34.045880    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHPort
	I0307 10:43:34.045984    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:34.046091    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:34.046178    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHUsername
	I0307 10:43:34.046281    8980 main.go:141] libmachine: Using SSH client type: native
	I0307 10:43:34.046584    8980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.26 22 <nil> <nil>}
	I0307 10:43:34.046592    8980 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 10:43:34.125452    8980 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 10:43:34.125463    8980 buildroot.go:70] root file system type: tmpfs
	I0307 10:43:34.125556    8980 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 10:43:34.125571    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHHostname
	I0307 10:43:34.125703    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHPort
	I0307 10:43:34.125787    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:34.125903    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:34.126008    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHUsername
	I0307 10:43:34.126142    8980 main.go:141] libmachine: Using SSH client type: native
	I0307 10:43:34.126449    8980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.26 22 <nil> <nil>}
	I0307 10:43:34.126494    8980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 10:43:34.208814    8980 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 10:43:34.208849    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHHostname
	I0307 10:43:34.209010    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHPort
	I0307 10:43:34.209116    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:34.209205    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:34.209311    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHUsername
	I0307 10:43:34.209430    8980 main.go:141] libmachine: Using SSH client type: native
	I0307 10:43:34.209766    8980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.26 22 <nil> <nil>}
	I0307 10:43:34.209779    8980 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 10:43:34.789197    8980 cache.go:162] opening:  /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0307 10:43:34.804523    8980 cache.go:162] opening:  /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0307 10:43:35.097056    8980 cache.go:162] opening:  /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0307 10:43:35.333545    8980 cache.go:162] opening:  /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0307 10:43:35.693753    8980 cache.go:157] /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0307 10:43:35.693773    8980 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 2.111086791s
	I0307 10:43:35.693784    8980 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0307 10:43:35.740740    8980 cache.go:162] opening:  /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0307 10:43:35.871712    8980 cache.go:162] opening:  /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0307 10:43:36.049664    8980 cache.go:157] /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
	I0307 10:43:36.049681    8980 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 2.466892538s
	I0307 10:43:36.049690    8980 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
	I0307 10:43:36.131123    8980 cache.go:162] opening:  /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0307 10:43:38.782933    8980 cache.go:157] /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
	I0307 10:43:38.782948    8980 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 5.200329772s
	I0307 10:43:38.782959    8980 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
	I0307 10:43:38.904420    8980 cache.go:157] /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
	I0307 10:43:38.904441    8980 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 5.32160794s
	I0307 10:43:38.904449    8980 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
	I0307 10:43:39.225672    8980 cache.go:157] /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
	I0307 10:43:39.225689    8980 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 5.643057599s
	I0307 10:43:39.225697    8980 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
	I0307 10:43:39.833182    8980 cache.go:157] /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
	I0307 10:43:39.833196    8980 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 6.250517895s
	I0307 10:43:39.833205    8980 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0307 10:43:40.006249    8980 cache.go:157] /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
	I0307 10:43:40.006264    8980 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 6.423364774s
	I0307 10:43:40.006272    8980 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
	I0307 10:43:40.006283    8980 cache.go:87] Successfully saved all images to host disk.
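The cache lines interleaved above are parallel downloads of the v1.17.0 control-plane images into minikube's on-host image cache; each image is stored as a tarball keyed by its registry path and loaded into the VM's runtime later. The cache is a plain directory and can be inspected directly (path taken from the log):

	ls /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/images/amd64/k8s.gcr.io/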
	I0307 10:43:45.910616    8980 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 10:43:45.910638    8980 machine.go:91] provisioned docker machine in 12.21260066s
	I0307 10:43:45.910649    8980 start.go:300] post-start starting for "running-upgrade-359000" (driver="hyperkit")
	I0307 10:43:45.910654    8980 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 10:43:45.910665    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .DriverName
	I0307 10:43:45.910863    8980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 10:43:45.910877    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHHostname
	I0307 10:43:45.910959    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHPort
	I0307 10:43:45.911047    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:45.911139    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHUsername
	I0307 10:43:45.911230    8980 sshutil.go:53] new ssh client: &{IP:192.168.64.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/running-upgrade-359000/id_rsa Username:docker}
	I0307 10:43:45.957948    8980 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 10:43:45.960638    8980 info.go:137] Remote host: Buildroot 2019.02.7
	I0307 10:43:45.960651    8980 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
	I0307 10:43:45.960741    8980 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
	I0307 10:43:45.960890    8980 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
	I0307 10:43:45.961062    8980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 10:43:45.968116    8980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
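The filesync step mirrors everything under the profile's .minikube/files tree into the guest with the relative path preserved, which is how the host-side certificate ends up in the guest's trust directory:

	# host                                                  guest
	.minikube/files/etc/ssl/certs/39032.pem      ->      /etc/ssl/certs/39032.pem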
	I0307 10:43:45.988687    8980 start.go:303] post-start completed in 78.026296ms
	I0307 10:43:45.988706    8980 fix.go:57] fixHost completed within 12.405028687s
	I0307 10:43:45.988724    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHHostname
	I0307 10:43:45.988870    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHPort
	I0307 10:43:45.988966    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:45.989057    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:45.989155    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHUsername
	I0307 10:43:45.989302    8980 main.go:141] libmachine: Using SSH client type: native
	I0307 10:43:45.989624    8980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.26 22 <nil> <nil>}
	I0307 10:43:45.989633    8980 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 10:43:46.071379    8980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678214626.241513799
	
	I0307 10:43:46.071392    8980 fix.go:207] guest clock: 1678214626.241513799
	I0307 10:43:46.071397    8980 fix.go:220] Guest: 2023-03-07 10:43:46.241513799 -0800 PST Remote: 2023-03-07 10:43:45.988711 -0800 PST m=+13.091834053 (delta=252.802799ms)
	I0307 10:43:46.071421    8980 fix.go:191] guest clock delta is within tolerance: 252.802799ms
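The clock check runs date +%s.%N in the guest over SSH and compares the result against the host clock captured at the same moment; only a delta beyond tolerance would trigger a guest clock resync. The guest half can be reproduced by hand with the key path from the log (the comparison itself has to happen host-side, since BSD date on macOS does not support %N):

	ssh -i /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/running-upgrade-359000/id_rsa \
	    docker@192.168.64.26 'date +%s.%N'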
	I0307 10:43:46.071425    8980 start.go:83] releasing machines lock for "running-upgrade-359000", held for 12.487812932s
	I0307 10:43:46.071443    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .DriverName
	I0307 10:43:46.071574    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetIP
	I0307 10:43:46.071653    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .DriverName
	I0307 10:43:46.071940    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .DriverName
	I0307 10:43:46.072038    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .DriverName
	I0307 10:43:46.072115    8980 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0307 10:43:46.072143    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHHostname
	I0307 10:43:46.072167    8980 ssh_runner.go:195] Run: cat /version.json
	I0307 10:43:46.072180    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHHostname
	I0307 10:43:46.072248    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHPort
	I0307 10:43:46.072339    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:46.072355    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHPort
	I0307 10:43:46.072450    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHKeyPath
	I0307 10:43:46.072482    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHUsername
	I0307 10:43:46.072564    8980 main.go:141] libmachine: (running-upgrade-359000) Calling .GetSSHUsername
	I0307 10:43:46.072583    8980 sshutil.go:53] new ssh client: &{IP:192.168.64.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/running-upgrade-359000/id_rsa Username:docker}
	I0307 10:43:46.072658    8980 sshutil.go:53] new ssh client: &{IP:192.168.64.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/running-upgrade-359000/id_rsa Username:docker}
	W0307 10:43:46.365256    8980 start.go:396] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0307 10:43:46.365332    8980 ssh_runner.go:195] Run: systemctl --version
	I0307 10:43:46.368434    8980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 10:43:46.371895    8980 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 10:43:46.371936    8980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0307 10:43:46.375583    8980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0307 10:43:46.381361    8980 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
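The find/sed pair above would rewrite any pre-existing bridge or podman CNI config onto minikube's pod network; here there is nothing to rewrite. On a host that did have one, the effect would be (the addresses left of the arrow are hypothetical):

	"subnet": "192.168.5.0/24"    ->    "subnet": "10.244.0.0/16"
	"gateway": "192.168.5.1"      ->    "gateway": "10.244.0.1"      (podman configs only)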
	I0307 10:43:46.381372    8980 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0307 10:43:46.381387    8980 start.go:485] detecting cgroup driver to use...
	I0307 10:43:46.381451    8980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:43:46.389742    8980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0307 10:43:46.394369    8980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 10:43:46.399013    8980 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 10:43:46.399056    8980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 10:43:46.403477    8980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:43:46.407924    8980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 10:43:46.412296    8980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:43:46.416785    8980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 10:43:46.422263    8980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 10:43:46.427037    8980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 10:43:46.431233    8980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 10:43:46.435175    8980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:43:46.524539    8980 ssh_runner.go:195] Run: sudo systemctl restart containerd
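Taken together, the sed edits above leave /etc/containerd/config.toml using cgroupfs as the cgroup driver and the runc v2 shim, with the pause image and CNI directory minikube expects; the key values after the rewrite look like this (fragment only, surrounding TOML sections omitted):

	sandbox_image = "k8s.gcr.io/pause:3.1"
	restrict_oom_score_adj = false
	SystemdCgroup = false
	conf_dir = "/etc/cni/net.d"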
	I0307 10:43:46.534875    8980 start.go:485] detecting cgroup driver to use...
	I0307 10:43:46.534948    8980 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 10:43:46.556539    8980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:43:46.567993    8980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 10:43:46.582297    8980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:43:46.592170    8980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:43:46.599532    8980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:43:46.608037    8980 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
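With containerd stopped and crictl repointed at the dockershim socket by the tee command above, CRI tooling in the guest goes through Docker's shim from here on. Once the kubelet's dockershim is listening, that endpoint can be exercised directly, e.g.:

	sudo crictl info    # reads /etc/crictl.yaml, queries unix:///var/run/dockershim.sock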
	I0307 10:43:46.690779    8980 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 10:43:46.776443    8980 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 10:43:46.776460    8980 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0307 10:43:46.784625    8980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:43:46.871095    8980 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:43:48.061941    8980 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.190819736s)
	I0307 10:43:48.084692    8980 out.go:177] 
	W0307 10:43:48.105892    8980 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0307 10:43:48.105919    8980 out.go:239] * 
	W0307 10:43:48.107152    8980 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:43:48.190793    8980 out.go:177] 

** /stderr **
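The terminal failure is docker.service refusing to start after the unit rewrite, which minikube surfaces as RUNTIME_ENABLE / exit status 90. The error text itself names the next diagnostic step; with SSH access to the guest as above, that would be something like:

	sudo systemctl status docker.service --no-pager
	sudo journalctl -u docker.service --no-pager | tail -n 50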
version_upgrade_test.go:140: upgrade from v1.6.2 to HEAD failed: out/minikube-darwin-amd64 start -p running-upgrade-359000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-03-07 10:43:48.22472 -0800 PST m=+2577.352192713
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-359000 -n running-upgrade-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-359000 -n running-upgrade-359000: exit status 6 (133.414817ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0307 10:43:48.351764    9103 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-359000" does not appear in /Users/jenkins/minikube-integration/15985-3430/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-359000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-359000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-359000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-359000: (1.444771547s)
--- FAIL: TestRunningBinaryUpgrade (116.72s)

TestNetworkPlugins/group/flannel/Start (21.17s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p flannel-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : exit status 90 (21.153235458s)

-- stdout --
	* [flannel-713000] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node flannel-713000 in cluster flannel-713000
	* Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0307 10:52:53.600532   10998 out.go:296] Setting OutFile to fd 1 ...
	I0307 10:52:53.601163   10998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:53.601179   10998 out.go:309] Setting ErrFile to fd 2...
	I0307 10:52:53.601188   10998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:52:53.601416   10998 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15985-3430/.minikube/bin
	I0307 10:52:53.603296   10998 out.go:303] Setting JSON to false
	I0307 10:52:53.623771   10998 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4948,"bootTime":1678210225,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 10:52:53.623883   10998 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0307 10:52:53.658371   10998 out.go:177] * [flannel-713000] minikube v1.29.0 on Darwin 13.2.1
	I0307 10:52:53.716107   10998 notify.go:220] Checking for updates...
	I0307 10:52:53.759897   10998 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 10:52:53.804824   10998 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:52:53.848681   10998 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 10:52:53.883468   10998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:52:53.961632   10998 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	I0307 10:52:54.024697   10998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:52:54.064236   10998 config.go:182] Loaded profile config "kindnet-713000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:52:54.064288   10998 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 10:52:54.120005   10998 out.go:177] * Using the hyperkit driver based on user configuration
	I0307 10:52:54.141525   10998 start.go:296] selected driver: hyperkit
	I0307 10:52:54.141565   10998 start.go:857] validating driver "hyperkit" against <nil>
	I0307 10:52:54.141590   10998 start.go:868] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:52:54.144527   10998 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:52:54.144644   10998 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15985-3430/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0307 10:52:54.151256   10998 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.29.0
	I0307 10:52:54.154426   10998 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:52:54.154443   10998 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0307 10:52:54.154491   10998 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0307 10:52:54.154680   10998 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 10:52:54.154713   10998 cni.go:84] Creating CNI manager for "flannel"
	I0307 10:52:54.154719   10998 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0307 10:52:54.154730   10998 start_flags.go:319] config:
	{Name:flannel-713000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:flannel-713000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 10:52:54.154825   10998 iso.go:125] acquiring lock: {Name:mk7e0ac9e85418e0580033b84b7097185a725e89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:52:54.200850   10998 out.go:177] * Starting control plane node flannel-713000 in cluster flannel-713000
	I0307 10:52:54.222615   10998 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:52:54.222659   10998 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0307 10:52:54.222676   10998 cache.go:57] Caching tarball of preloaded images
	I0307 10:52:54.222783   10998 preload.go:174] Found /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 10:52:54.222792   10998 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0307 10:52:54.222876   10998 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/flannel-713000/config.json ...
	I0307 10:52:54.222895   10998 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/flannel-713000/config.json: {Name:mk4c48dc01a2abe28b1871a88d98e2be3eb3b839 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:52:54.223152   10998 cache.go:193] Successfully downloaded all kic artifacts
	I0307 10:52:54.223180   10998 start.go:364] acquiring machines lock for flannel-713000: {Name:mk134a6441e29f224c19617a6bd79aa72abb21e6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 10:52:54.223226   10998 start.go:368] acquired machines lock for "flannel-713000" in 38.064µs
	I0307 10:52:54.223253   10998 start.go:93] Provisioning new machine with config: &{Name:flannel-713000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:flannel-713000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 10:52:54.223315   10998 start.go:125] createHost starting for "" (driver="hyperkit")
	I0307 10:52:54.265735   10998 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0307 10:52:54.266152   10998 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:52:54.266213   10998 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:52:54.274530   10998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54567
	I0307 10:52:54.274886   10998 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:52:54.275323   10998 main.go:141] libmachine: Using API Version  1
	I0307 10:52:54.275334   10998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:52:54.275550   10998 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:52:54.275651   10998 main.go:141] libmachine: (flannel-713000) Calling .GetMachineName
	I0307 10:52:54.275719   10998 main.go:141] libmachine: (flannel-713000) Calling .DriverName
	I0307 10:52:54.275825   10998 start.go:159] libmachine.API.Create for "flannel-713000" (driver="hyperkit")
	I0307 10:52:54.275850   10998 client.go:168] LocalClient.Create starting
	I0307 10:52:54.275891   10998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem
	I0307 10:52:54.275935   10998 main.go:141] libmachine: Decoding PEM data...
	I0307 10:52:54.275953   10998 main.go:141] libmachine: Parsing certificate...
	I0307 10:52:54.276017   10998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem
	I0307 10:52:54.276048   10998 main.go:141] libmachine: Decoding PEM data...
	I0307 10:52:54.276059   10998 main.go:141] libmachine: Parsing certificate...
	I0307 10:52:54.276076   10998 main.go:141] libmachine: Running pre-create checks...
	I0307 10:52:54.276082   10998 main.go:141] libmachine: (flannel-713000) Calling .PreCreateCheck
	I0307 10:52:54.276158   10998 main.go:141] libmachine: (flannel-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:52:54.276347   10998 main.go:141] libmachine: (flannel-713000) Calling .GetConfigRaw
	I0307 10:52:54.276761   10998 main.go:141] libmachine: Creating machine...
	I0307 10:52:54.276770   10998 main.go:141] libmachine: (flannel-713000) Calling .Create
	I0307 10:52:54.276837   10998 main.go:141] libmachine: (flannel-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:52:54.276963   10998 main.go:141] libmachine: (flannel-713000) DBG | I0307 10:52:54.276834   11008 common.go:116] Making disk image using store path: /Users/jenkins/minikube-integration/15985-3430/.minikube
	I0307 10:52:54.277022   10998 main.go:141] libmachine: (flannel-713000) Downloading /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15985-3430/.minikube/cache/iso/amd64/minikube-v1.29.0-1677261626-15923-amd64.iso...
	I0307 10:52:54.461827   10998 main.go:141] libmachine: (flannel-713000) DBG | I0307 10:52:54.461737   11008 common.go:123] Creating ssh key: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/id_rsa...
	I0307 10:52:54.561506   10998 main.go:141] libmachine: (flannel-713000) DBG | I0307 10:52:54.561386   11008 common.go:129] Creating raw disk image: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/flannel-713000.rawdisk...
	I0307 10:52:54.561524   10998 main.go:141] libmachine: (flannel-713000) DBG | Writing magic tar header
	I0307 10:52:54.561538   10998 main.go:141] libmachine: (flannel-713000) DBG | Writing SSH key tar header
	I0307 10:52:54.561791   10998 main.go:141] libmachine: (flannel-713000) DBG | I0307 10:52:54.561741   11008 common.go:143] Fixing permissions on /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000 ...
	I0307 10:52:54.879700   10998 main.go:141] libmachine: (flannel-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:52:54.879720   10998 main.go:141] libmachine: (flannel-713000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/hyperkit.pid
	I0307 10:52:54.879737   10998 main.go:141] libmachine: (flannel-713000) DBG | Using UUID 44b15472-bd19-11ed-8033-149d997fca88
	I0307 10:52:54.907848   10998 main.go:141] libmachine: (flannel-713000) DBG | Generated MAC 36:8c:81:43:42:c3
	I0307 10:52:54.907867   10998 main.go:141] libmachine: (flannel-713000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=flannel-713000
	I0307 10:52:54.907927   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"44b15472-bd19-11ed-8033-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b01b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:52:54.907967   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"44b15472-bd19-11ed-8033-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b01b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0307 10:52:54.908004   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/hyperkit.pid", "-c", "2", "-m", "3072M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "44b15472-bd19-11ed-8033-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/flannel-713000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=flannel-713000"}
	I0307 10:52:54.908044   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/hyperkit.pid -c 2 -m 3072M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 44b15472-bd19-11ed-8033-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/flannel-713000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/tty,log=/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/console-ring -f kexec,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/bzimage,/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=flannel-713000"
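Decoded, that invocation is a direct-kernel boot: two vCPUs and 3072M of RAM, a virtio NIC on vmnet (with the UUID from which vmnet derives the MAC), the raw disk as virtio-blk, the boot2docker ISO as an AHCI CD, a virtio entropy device, a serial console on an autopty, and the extracted bzimage/initrd pair booted via the kexec firmware option with the kernel command line shown. Reduced to its skeleton (paths elided):

	hyperkit -A -u -F <pidfile> \
	  -c 2 -m 3072M \                                  # vCPUs / RAM
	  -s 0:0,hostbridge -s 31,lpc \                    # PCI host bridge + LPC bus
	  -s 1:0,virtio-net -U <uuid> \                    # vmnet NIC; UUID -> stable MAC
	  -s 2:0,virtio-blk,<rawdisk> \                    # root disk
	  -s 3,ahci-cd,<boot2docker.iso> \                 # boot/runtime ISO
	  -s 4,virtio-rnd \                                # guest entropy
	  -l com1,autopty=<tty>,log=<console-ring> \       # serial console
	  -f kexec,<bzimage>,<initrd>,"<kernel cmdline>"   # direct kernel boot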
	I0307 10:52:54.908065   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0307 10:52:54.911052   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:54 DEBUG: hyperkit: Pid is 11009
	I0307 10:52:54.911451   10998 main.go:141] libmachine: (flannel-713000) DBG | Attempt 0
	I0307 10:52:54.911464   10998 main.go:141] libmachine: (flannel-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:52:54.911516   10998 main.go:141] libmachine: (flannel-713000) DBG | hyperkit pid from json: 11009
	I0307 10:52:54.912443   10998 main.go:141] libmachine: (flannel-713000) DBG | Searching for 36:8c:81:43:42:c3 in /var/db/dhcpd_leases ...
	I0307 10:52:54.912494   10998 main.go:141] libmachine: (flannel-713000) DBG | Found 35 entries in /var/db/dhcpd_leases!
	I0307 10:52:54.912512   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.36 HWAddress:ce:46:3c:69:75:34 ID:1,ce:46:3c:69:75:34 Lease:0x6408d940}
	I0307 10:52:54.912540   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.35 HWAddress:1a:fb:65:4d:fb:af ID:1,1a:fb:65:4d:fb:af Lease:0x6408d931}
	I0307 10:52:54.912555   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.34 HWAddress:92:90:b3:ef:e5:6d ID:1,92:90:b3:ef:e5:6d Lease:0x6408d8d2}
	I0307 10:52:54.912567   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.33 HWAddress:82:2:3a:12:9a:d7 ID:1,82:2:3a:12:9a:d7 Lease:0x6408d8c6}
	I0307 10:52:54.912578   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.32 HWAddress:6e:e0:36:c5:f4:a2 ID:1,6e:e0:36:c5:f4:a2 Lease:0x6407873c}
	I0307 10:52:54.912590   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.31 HWAddress:46:6a:a8:d9:70:ba ID:1,46:6a:a8:d9:70:ba Lease:0x6408d87d}
	I0307 10:52:54.912602   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:62:de:b2:37:ff:cc ID:1,62:de:b2:37:ff:cc Lease:0x64078714}
	I0307 10:52:54.912631   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:72:68:21:c3:8e:56 ID:1,72:68:21:c3:8e:56 Lease:0x6408d806}
	I0307 10:52:54.912647   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:42:85:4b:60:94:31 ID:1,42:85:4b:60:94:31 Lease:0x6408d82e}
	I0307 10:52:54.912672   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:da:6a:34:22:80:89 ID:1,da:6a:34:22:80:89 Lease:0x6408d7b8}
	I0307 10:52:54.912687   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:de:5b:4c:34:9b:d0 ID:1,de:5b:4c:34:9b:d0 Lease:0x6408d70a}
	I0307 10:52:54.912716   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:72:2d:b0:7c:eb:5c ID:1,72:2d:b0:7c:eb:5c Lease:0x64078570}
	I0307 10:52:54.912737   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:5e:31:b6:ba:13:4f ID:1,5e:31:b6:ba:13:4f Lease:0x6408d6c2}
	I0307 10:52:54.912751   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:6:b9:ee:b6:d0:e0 ID:1,6:b9:ee:b6:d0:e0 Lease:0x6408d6a1}
	I0307 10:52:54.912765   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:62:1:39:2a:a:78 ID:1,62:1:39:2a:a:78 Lease:0x64078538}
	I0307 10:52:54.912781   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:fe:30:93:71:e8:2f ID:1,fe:30:93:71:e8:2f Lease:0x6408d66d}
	I0307 10:52:54.912800   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:56:d2:23:11:10:5d ID:1,56:d2:23:11:10:5d Lease:0x6408d654}
	I0307 10:52:54.912830   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:26:d4:c6:4:a2:b7 ID:1,26:d4:c6:4:a2:b7 Lease:0x6408d607}
	I0307 10:52:54.912843   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:d6:4e:83:69:21:c9 ID:1,d6:4e:83:69:21:c9 Lease:0x6408d594}
	I0307 10:52:54.912852   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:4e:6a:ba:6f:8b:5c ID:1,4e:6a:ba:6f:8b:5c Lease:0x6408d545}
	I0307 10:52:54.912861   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:d2:92:d5:11:fb:2f ID:1,d2:92:d5:11:fb:2f Lease:0x6408d4aa}
	I0307 10:52:54.912871   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x6408d3f3}
	I0307 10:52:54.912896   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ca:14:a2:6d:d0:c ID:1,ca:14:a2:6d:d0:c Lease:0x6407819f}
	I0307 10:52:54.912905   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d48c}
	I0307 10:52:54.912916   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d45b}
	I0307 10:52:54.912924   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:56:c4:c2:e0:9d:e9 ID:1,56:c4:c2:e0:9d:e9 Lease:0x64077fcf}
	I0307 10:52:54.912933   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:a6:c0:d0:aa:4f:d9 ID:1,a6:c0:d0:aa:4f:d9 Lease:0x64077fa1}
	I0307 10:52:54.912941   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:7e:b9:5c:39:30:f3 ID:1,7e:b9:5c:39:30:f3 Lease:0x6408d0d7}
	I0307 10:52:54.912953   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:b2:81:fa:b9:c:2c ID:1,b2:81:fa:b9:c:2c Lease:0x6408d0ad}
	I0307 10:52:54.912962   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:fa:a2:c:96:5d:9 ID:1,fa:a2:c:96:5d:9 Lease:0x6408d069}
	I0307 10:52:54.912974   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:62:68:6d:b0:9a:ca ID:1,62:68:6d:b0:9a:ca Lease:0x6408cff3}
	I0307 10:52:54.912983   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:46:6d:3b:f7:df:a3 ID:1,46:6d:3b:f7:df:a3 Lease:0x64077e58}
	I0307 10:52:54.912994   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:be:da:45:30:af:9 ID:1,be:da:45:30:af:9 Lease:0x6408cec1}
	I0307 10:52:54.913002   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:42:d:93:cc:c3:2e ID:1,42:d:93:cc:c3:2e Lease:0x64077d36}
	I0307 10:52:54.913012   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:86:96:98:a1:cb:10 ID:1,86:96:98:a1:cb:10 Lease:0x6408cd9a}
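The driver is polling the macOS DHCP lease database for an entry matching the MAC it just generated; that is how it learns the new VM's IP, and none of the 35 existing leases match yet. The same lookup can be done by hand on the host:

	grep -B2 -A3 '36:8c:81:43:42:c3' /var/db/dhcpd_leases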
	I0307 10:52:54.917348   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0307 10:52:54.925922   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0307 10:52:54.926785   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:52:54.926801   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:52:54.926812   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:52:54.926826   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:52:55.309230   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0307 10:52:55.309246   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0307 10:52:55.413304   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0307 10:52:55.413328   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0307 10:52:55.413344   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0307 10:52:55.413365   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0307 10:52:55.414159   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0307 10:52:55.414171   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0307 10:52:56.913552   10998 main.go:141] libmachine: (flannel-713000) DBG | Attempt 1
	I0307 10:52:56.913569   10998 main.go:141] libmachine: (flannel-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:52:56.913644   10998 main.go:141] libmachine: (flannel-713000) DBG | hyperkit pid from json: 11009
	I0307 10:52:56.914429   10998 main.go:141] libmachine: (flannel-713000) DBG | Searching for 36:8c:81:43:42:c3 in /var/db/dhcpd_leases ...
	I0307 10:52:56.914507   10998 main.go:141] libmachine: (flannel-713000) DBG | Found 35 entries in /var/db/dhcpd_leases!
	I0307 10:52:56.914520   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.36 HWAddress:ce:46:3c:69:75:34 ID:1,ce:46:3c:69:75:34 Lease:0x6408d940}
	I0307 10:52:56.914539   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.35 HWAddress:1a:fb:65:4d:fb:af ID:1,1a:fb:65:4d:fb:af Lease:0x6408d931}
	I0307 10:52:56.914549   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.34 HWAddress:92:90:b3:ef:e5:6d ID:1,92:90:b3:ef:e5:6d Lease:0x6408d8d2}
	I0307 10:52:56.914559   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.33 HWAddress:82:2:3a:12:9a:d7 ID:1,82:2:3a:12:9a:d7 Lease:0x6408d8c6}
	I0307 10:52:56.914566   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.32 HWAddress:6e:e0:36:c5:f4:a2 ID:1,6e:e0:36:c5:f4:a2 Lease:0x6407873c}
	I0307 10:52:56.914581   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.31 HWAddress:46:6a:a8:d9:70:ba ID:1,46:6a:a8:d9:70:ba Lease:0x6408d87d}
	I0307 10:52:56.914596   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:62:de:b2:37:ff:cc ID:1,62:de:b2:37:ff:cc Lease:0x64078714}
	I0307 10:52:56.914611   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:72:68:21:c3:8e:56 ID:1,72:68:21:c3:8e:56 Lease:0x6408d806}
	I0307 10:52:56.914622   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:42:85:4b:60:94:31 ID:1,42:85:4b:60:94:31 Lease:0x6408d82e}
	I0307 10:52:56.914630   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:da:6a:34:22:80:89 ID:1,da:6a:34:22:80:89 Lease:0x6408d7b8}
	I0307 10:52:56.914636   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:de:5b:4c:34:9b:d0 ID:1,de:5b:4c:34:9b:d0 Lease:0x6408d70a}
	I0307 10:52:56.914643   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:72:2d:b0:7c:eb:5c ID:1,72:2d:b0:7c:eb:5c Lease:0x64078570}
	I0307 10:52:56.914650   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:5e:31:b6:ba:13:4f ID:1,5e:31:b6:ba:13:4f Lease:0x6408d6c2}
	I0307 10:52:56.914666   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:6:b9:ee:b6:d0:e0 ID:1,6:b9:ee:b6:d0:e0 Lease:0x6408d6a1}
	I0307 10:52:56.914678   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:62:1:39:2a:a:78 ID:1,62:1:39:2a:a:78 Lease:0x64078538}
	I0307 10:52:56.914687   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:fe:30:93:71:e8:2f ID:1,fe:30:93:71:e8:2f Lease:0x6408d66d}
	I0307 10:52:56.914696   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:56:d2:23:11:10:5d ID:1,56:d2:23:11:10:5d Lease:0x6408d654}
	I0307 10:52:56.914704   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:26:d4:c6:4:a2:b7 ID:1,26:d4:c6:4:a2:b7 Lease:0x6408d607}
	I0307 10:52:56.914713   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:d6:4e:83:69:21:c9 ID:1,d6:4e:83:69:21:c9 Lease:0x6408d594}
	I0307 10:52:56.914720   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:4e:6a:ba:6f:8b:5c ID:1,4e:6a:ba:6f:8b:5c Lease:0x6408d545}
	I0307 10:52:56.914729   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:d2:92:d5:11:fb:2f ID:1,d2:92:d5:11:fb:2f Lease:0x6408d4aa}
	I0307 10:52:56.914737   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x6408d3f3}
	I0307 10:52:56.914748   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ca:14:a2:6d:d0:c ID:1,ca:14:a2:6d:d0:c Lease:0x6407819f}
	I0307 10:52:56.914756   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d48c}
	I0307 10:52:56.914765   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d45b}
	I0307 10:52:56.914772   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:56:c4:c2:e0:9d:e9 ID:1,56:c4:c2:e0:9d:e9 Lease:0x64077fcf}
	I0307 10:52:56.914781   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:a6:c0:d0:aa:4f:d9 ID:1,a6:c0:d0:aa:4f:d9 Lease:0x64077fa1}
	I0307 10:52:56.914798   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:7e:b9:5c:39:30:f3 ID:1,7e:b9:5c:39:30:f3 Lease:0x6408d0d7}
	I0307 10:52:56.914808   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:b2:81:fa:b9:c:2c ID:1,b2:81:fa:b9:c:2c Lease:0x6408d0ad}
	I0307 10:52:56.914816   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:fa:a2:c:96:5d:9 ID:1,fa:a2:c:96:5d:9 Lease:0x6408d069}
	I0307 10:52:56.914824   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:62:68:6d:b0:9a:ca ID:1,62:68:6d:b0:9a:ca Lease:0x6408cff3}
	I0307 10:52:56.914832   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:46:6d:3b:f7:df:a3 ID:1,46:6d:3b:f7:df:a3 Lease:0x64077e58}
	I0307 10:52:56.914841   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:be:da:45:30:af:9 ID:1,be:da:45:30:af:9 Lease:0x6408cec1}
	I0307 10:52:56.914849   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:42:d:93:cc:c3:2e ID:1,42:d:93:cc:c3:2e Lease:0x64077d36}
	I0307 10:52:56.914857   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:86:96:98:a1:cb:10 ID:1,86:96:98:a1:cb:10 Lease:0x6408cd9a}
	I0307 10:52:58.914907   10998 main.go:141] libmachine: (flannel-713000) DBG | Attempt 2
	I0307 10:52:58.914924   10998 main.go:141] libmachine: (flannel-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:52:58.914982   10998 main.go:141] libmachine: (flannel-713000) DBG | hyperkit pid from json: 11009
	I0307 10:52:58.915692   10998 main.go:141] libmachine: (flannel-713000) DBG | Searching for 36:8c:81:43:42:c3 in /var/db/dhcpd_leases ...
	I0307 10:52:58.915789   10998 main.go:141] libmachine: (flannel-713000) DBG | Found 35 entries in /var/db/dhcpd_leases!
	I0307 10:52:58.915800   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.36 HWAddress:ce:46:3c:69:75:34 ID:1,ce:46:3c:69:75:34 Lease:0x6408d940}
	I0307 10:52:58.915808   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.35 HWAddress:1a:fb:65:4d:fb:af ID:1,1a:fb:65:4d:fb:af Lease:0x6408d931}
	I0307 10:52:58.915815   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.34 HWAddress:92:90:b3:ef:e5:6d ID:1,92:90:b3:ef:e5:6d Lease:0x6408d8d2}
	I0307 10:52:58.915824   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.33 HWAddress:82:2:3a:12:9a:d7 ID:1,82:2:3a:12:9a:d7 Lease:0x6408d8c6}
	I0307 10:52:58.915835   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.32 HWAddress:6e:e0:36:c5:f4:a2 ID:1,6e:e0:36:c5:f4:a2 Lease:0x6407873c}
	I0307 10:52:58.915844   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.31 HWAddress:46:6a:a8:d9:70:ba ID:1,46:6a:a8:d9:70:ba Lease:0x6408d87d}
	I0307 10:52:58.915850   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:62:de:b2:37:ff:cc ID:1,62:de:b2:37:ff:cc Lease:0x64078714}
	I0307 10:52:58.915868   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:72:68:21:c3:8e:56 ID:1,72:68:21:c3:8e:56 Lease:0x6408d806}
	I0307 10:52:58.915877   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:42:85:4b:60:94:31 ID:1,42:85:4b:60:94:31 Lease:0x6408d82e}
	I0307 10:52:58.915884   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:da:6a:34:22:80:89 ID:1,da:6a:34:22:80:89 Lease:0x6408d7b8}
	I0307 10:52:58.915893   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:de:5b:4c:34:9b:d0 ID:1,de:5b:4c:34:9b:d0 Lease:0x6408d70a}
	I0307 10:52:58.915908   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:72:2d:b0:7c:eb:5c ID:1,72:2d:b0:7c:eb:5c Lease:0x64078570}
	I0307 10:52:58.915916   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:5e:31:b6:ba:13:4f ID:1,5e:31:b6:ba:13:4f Lease:0x6408d6c2}
	I0307 10:52:58.915926   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:6:b9:ee:b6:d0:e0 ID:1,6:b9:ee:b6:d0:e0 Lease:0x6408d6a1}
	I0307 10:52:58.915933   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:62:1:39:2a:a:78 ID:1,62:1:39:2a:a:78 Lease:0x64078538}
	I0307 10:52:58.915940   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:fe:30:93:71:e8:2f ID:1,fe:30:93:71:e8:2f Lease:0x6408d66d}
	I0307 10:52:58.915947   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:56:d2:23:11:10:5d ID:1,56:d2:23:11:10:5d Lease:0x6408d654}
	I0307 10:52:58.915957   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:26:d4:c6:4:a2:b7 ID:1,26:d4:c6:4:a2:b7 Lease:0x6408d607}
	I0307 10:52:58.915964   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:d6:4e:83:69:21:c9 ID:1,d6:4e:83:69:21:c9 Lease:0x6408d594}
	I0307 10:52:58.915971   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:4e:6a:ba:6f:8b:5c ID:1,4e:6a:ba:6f:8b:5c Lease:0x6408d545}
	I0307 10:52:58.915977   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:d2:92:d5:11:fb:2f ID:1,d2:92:d5:11:fb:2f Lease:0x6408d4aa}
	I0307 10:52:58.915985   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x6408d3f3}
	I0307 10:52:58.915993   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ca:14:a2:6d:d0:c ID:1,ca:14:a2:6d:d0:c Lease:0x6407819f}
	I0307 10:52:58.916002   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d48c}
	I0307 10:52:58.916009   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d45b}
	I0307 10:52:58.916016   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:56:c4:c2:e0:9d:e9 ID:1,56:c4:c2:e0:9d:e9 Lease:0x64077fcf}
	I0307 10:52:58.916023   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:a6:c0:d0:aa:4f:d9 ID:1,a6:c0:d0:aa:4f:d9 Lease:0x64077fa1}
	I0307 10:52:58.916041   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:7e:b9:5c:39:30:f3 ID:1,7e:b9:5c:39:30:f3 Lease:0x6408d0d7}
	I0307 10:52:58.916058   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:b2:81:fa:b9:c:2c ID:1,b2:81:fa:b9:c:2c Lease:0x6408d0ad}
	I0307 10:52:58.916075   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:fa:a2:c:96:5d:9 ID:1,fa:a2:c:96:5d:9 Lease:0x6408d069}
	I0307 10:52:58.916085   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:62:68:6d:b0:9a:ca ID:1,62:68:6d:b0:9a:ca Lease:0x6408cff3}
	I0307 10:52:58.916094   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:46:6d:3b:f7:df:a3 ID:1,46:6d:3b:f7:df:a3 Lease:0x64077e58}
	I0307 10:52:58.916101   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:be:da:45:30:af:9 ID:1,be:da:45:30:af:9 Lease:0x6408cec1}
	I0307 10:52:58.916118   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:42:d:93:cc:c3:2e ID:1,42:d:93:cc:c3:2e Lease:0x64077d36}
	I0307 10:52:58.916129   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:86:96:98:a1:cb:10 ID:1,86:96:98:a1:cb:10 Lease:0x6408cd9a}
	I0307 10:52:59.932836   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:59 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0307 10:52:59.932855   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:59 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0307 10:52:59.932864   10998 main.go:141] libmachine: (flannel-713000) DBG | 2023/03/07 10:52:59 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0307 10:53:00.917083   10998 main.go:141] libmachine: (flannel-713000) DBG | Attempt 3
	I0307 10:53:00.917098   10998 main.go:141] libmachine: (flannel-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:53:00.917209   10998 main.go:141] libmachine: (flannel-713000) DBG | hyperkit pid from json: 11009
	I0307 10:53:00.917955   10998 main.go:141] libmachine: (flannel-713000) DBG | Searching for 36:8c:81:43:42:c3 in /var/db/dhcpd_leases ...
	I0307 10:53:00.918027   10998 main.go:141] libmachine: (flannel-713000) DBG | Found 35 entries in /var/db/dhcpd_leases!
	I0307 10:53:00.918038   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.36 HWAddress:ce:46:3c:69:75:34 ID:1,ce:46:3c:69:75:34 Lease:0x6408d940}
	I0307 10:53:00.918057   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.35 HWAddress:1a:fb:65:4d:fb:af ID:1,1a:fb:65:4d:fb:af Lease:0x6408d931}
	I0307 10:53:00.918065   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.34 HWAddress:92:90:b3:ef:e5:6d ID:1,92:90:b3:ef:e5:6d Lease:0x6408d8d2}
	I0307 10:53:00.918073   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.33 HWAddress:82:2:3a:12:9a:d7 ID:1,82:2:3a:12:9a:d7 Lease:0x6408d8c6}
	I0307 10:53:00.918081   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.32 HWAddress:6e:e0:36:c5:f4:a2 ID:1,6e:e0:36:c5:f4:a2 Lease:0x6407873c}
	I0307 10:53:00.918102   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.31 HWAddress:46:6a:a8:d9:70:ba ID:1,46:6a:a8:d9:70:ba Lease:0x6408d87d}
	I0307 10:53:00.918123   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:62:de:b2:37:ff:cc ID:1,62:de:b2:37:ff:cc Lease:0x64078714}
	I0307 10:53:00.918141   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:72:68:21:c3:8e:56 ID:1,72:68:21:c3:8e:56 Lease:0x6408d806}
	I0307 10:53:00.918159   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:42:85:4b:60:94:31 ID:1,42:85:4b:60:94:31 Lease:0x6408d82e}
	I0307 10:53:00.918171   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:da:6a:34:22:80:89 ID:1,da:6a:34:22:80:89 Lease:0x6408d7b8}
	I0307 10:53:00.918180   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:de:5b:4c:34:9b:d0 ID:1,de:5b:4c:34:9b:d0 Lease:0x6408d70a}
	I0307 10:53:00.918187   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:72:2d:b0:7c:eb:5c ID:1,72:2d:b0:7c:eb:5c Lease:0x64078570}
	I0307 10:53:00.918195   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:5e:31:b6:ba:13:4f ID:1,5e:31:b6:ba:13:4f Lease:0x6408d6c2}
	I0307 10:53:00.918202   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:6:b9:ee:b6:d0:e0 ID:1,6:b9:ee:b6:d0:e0 Lease:0x6408d6a1}
	I0307 10:53:00.918212   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:62:1:39:2a:a:78 ID:1,62:1:39:2a:a:78 Lease:0x64078538}
	I0307 10:53:00.918219   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:fe:30:93:71:e8:2f ID:1,fe:30:93:71:e8:2f Lease:0x6408d66d}
	I0307 10:53:00.918226   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:56:d2:23:11:10:5d ID:1,56:d2:23:11:10:5d Lease:0x6408d654}
	I0307 10:53:00.918234   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:26:d4:c6:4:a2:b7 ID:1,26:d4:c6:4:a2:b7 Lease:0x6408d607}
	I0307 10:53:00.918248   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:d6:4e:83:69:21:c9 ID:1,d6:4e:83:69:21:c9 Lease:0x6408d594}
	I0307 10:53:00.918261   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:4e:6a:ba:6f:8b:5c ID:1,4e:6a:ba:6f:8b:5c Lease:0x6408d545}
	I0307 10:53:00.918270   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:d2:92:d5:11:fb:2f ID:1,d2:92:d5:11:fb:2f Lease:0x6408d4aa}
	I0307 10:53:00.918279   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:12:aa:e8:53:6e:6b ID:1,12:aa:e8:53:6e:6b Lease:0x6408d3f3}
	I0307 10:53:00.918287   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ca:14:a2:6d:d0:c ID:1,ca:14:a2:6d:d0:c Lease:0x6407819f}
	I0307 10:53:00.918315   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ba:65:3c:6f:8d:dc ID:1,ba:65:3c:6f:8d:dc Lease:0x6408d48c}
	I0307 10:53:00.918330   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:f2:4e:cd:75:18:a7 ID:1,f2:4e:cd:75:18:a7 Lease:0x6408d45b}
	I0307 10:53:00.918338   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:56:c4:c2:e0:9d:e9 ID:1,56:c4:c2:e0:9d:e9 Lease:0x64077fcf}
	I0307 10:53:00.918346   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:a6:c0:d0:aa:4f:d9 ID:1,a6:c0:d0:aa:4f:d9 Lease:0x64077fa1}
	I0307 10:53:00.918355   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:7e:b9:5c:39:30:f3 ID:1,7e:b9:5c:39:30:f3 Lease:0x6408d0d7}
	I0307 10:53:00.918363   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:b2:81:fa:b9:c:2c ID:1,b2:81:fa:b9:c:2c Lease:0x6408d0ad}
	I0307 10:53:00.918371   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:fa:a2:c:96:5d:9 ID:1,fa:a2:c:96:5d:9 Lease:0x6408d069}
	I0307 10:53:00.918378   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:62:68:6d:b0:9a:ca ID:1,62:68:6d:b0:9a:ca Lease:0x6408cff3}
	I0307 10:53:00.918386   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:46:6d:3b:f7:df:a3 ID:1,46:6d:3b:f7:df:a3 Lease:0x64077e58}
	I0307 10:53:00.918394   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:be:da:45:30:af:9 ID:1,be:da:45:30:af:9 Lease:0x6408cec1}
	I0307 10:53:00.918402   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:42:d:93:cc:c3:2e ID:1,42:d:93:cc:c3:2e Lease:0x64077d36}
	I0307 10:53:00.918410   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:86:96:98:a1:cb:10 ID:1,86:96:98:a1:cb:10 Lease:0x6408cd9a}
	I0307 10:53:02.918248   10998 main.go:141] libmachine: (flannel-713000) DBG | Attempt 4
	I0307 10:53:02.918266   10998 main.go:141] libmachine: (flannel-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:53:02.918316   10998 main.go:141] libmachine: (flannel-713000) DBG | hyperkit pid from json: 11009
	I0307 10:53:02.919101   10998 main.go:141] libmachine: (flannel-713000) DBG | Searching for 36:8c:81:43:42:c3 in /var/db/dhcpd_leases ...
	I0307 10:53:02.919175   10998 main.go:141] libmachine: (flannel-713000) DBG | Found 36 entries in /var/db/dhcpd_leases!
	I0307 10:53:02.919188   10998 main.go:141] libmachine: (flannel-713000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.37 HWAddress:36:8c:81:43:42:c3 ID:1,36:8c:81:43:42:c3 Lease:0x6408d98e}
	I0307 10:53:02.919208   10998 main.go:141] libmachine: (flannel-713000) DBG | Found match: 36:8c:81:43:42:c3
	I0307 10:53:02.919215   10998 main.go:141] libmachine: (flannel-713000) DBG | IP: 192.168.64.37
	I0307 10:53:02.919274   10998 main.go:141] libmachine: (flannel-713000) Calling .GetConfigRaw
	I0307 10:53:02.919850   10998 main.go:141] libmachine: (flannel-713000) Calling .DriverName
	I0307 10:53:02.919949   10998 main.go:141] libmachine: (flannel-713000) Calling .DriverName
	I0307 10:53:02.920037   10998 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0307 10:53:02.920050   10998 main.go:141] libmachine: (flannel-713000) Calling .GetState
	I0307 10:53:02.920126   10998 main.go:141] libmachine: (flannel-713000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:53:02.920183   10998 main.go:141] libmachine: (flannel-713000) DBG | hyperkit pid from json: 11009
	I0307 10:53:02.920940   10998 main.go:141] libmachine: Detecting operating system of created instance...
	I0307 10:53:02.920951   10998 main.go:141] libmachine: Waiting for SSH to be available...
	I0307 10:53:02.920959   10998 main.go:141] libmachine: Getting to WaitForSSH function...
	I0307 10:53:02.920965   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHHostname
	I0307 10:53:02.921053   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHPort
	I0307 10:53:02.921137   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:02.921224   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:02.921316   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHUsername
	I0307 10:53:02.921442   10998 main.go:141] libmachine: Using SSH client type: native
	I0307 10:53:02.921832   10998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.37 22 <nil> <nil>}
	I0307 10:53:02.921840   10998 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0307 10:53:04.000467   10998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 10:53:04.000481   10998 main.go:141] libmachine: Detecting the provisioner...
	I0307 10:53:04.000488   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHHostname
	I0307 10:53:04.000619   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHPort
	I0307 10:53:04.000713   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:04.000785   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:04.000870   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHUsername
	I0307 10:53:04.000991   10998 main.go:141] libmachine: Using SSH client type: native
	I0307 10:53:04.001316   10998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.37 22 <nil> <nil>}
	I0307 10:53:04.001325   10998 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0307 10:53:04.072124   10998 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gab7f370-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0307 10:53:04.072193   10998 main.go:141] libmachine: found compatible host: buildroot
	I0307 10:53:04.072200   10998 main.go:141] libmachine: Provisioning with buildroot...
	I0307 10:53:04.072207   10998 main.go:141] libmachine: (flannel-713000) Calling .GetMachineName
	I0307 10:53:04.072338   10998 buildroot.go:166] provisioning hostname "flannel-713000"
	I0307 10:53:04.072350   10998 main.go:141] libmachine: (flannel-713000) Calling .GetMachineName
	I0307 10:53:04.072433   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHHostname
	I0307 10:53:04.072526   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHPort
	I0307 10:53:04.072606   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:04.072689   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:04.072772   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHUsername
	I0307 10:53:04.072893   10998 main.go:141] libmachine: Using SSH client type: native
	I0307 10:53:04.073207   10998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.37 22 <nil> <nil>}
	I0307 10:53:04.073217   10998 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-713000 && echo "flannel-713000" | sudo tee /etc/hostname
	I0307 10:53:04.154989   10998 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-713000
	
	I0307 10:53:04.155014   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHHostname
	I0307 10:53:04.155150   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHPort
	I0307 10:53:04.155260   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:04.155354   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:04.155446   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHUsername
	I0307 10:53:04.155571   10998 main.go:141] libmachine: Using SSH client type: native
	I0307 10:53:04.155892   10998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.37 22 <nil> <nil>}
	I0307 10:53:04.155904   10998 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-713000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-713000/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-713000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 10:53:04.233959   10998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 10:53:04.233982   10998 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15985-3430/.minikube CaCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15985-3430/.minikube}
	I0307 10:53:04.233996   10998 buildroot.go:174] setting up certificates
	I0307 10:53:04.234005   10998 provision.go:83] configureAuth start
	I0307 10:53:04.234012   10998 main.go:141] libmachine: (flannel-713000) Calling .GetMachineName
	I0307 10:53:04.234159   10998 main.go:141] libmachine: (flannel-713000) Calling .GetIP
	I0307 10:53:04.234253   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHHostname
	I0307 10:53:04.234338   10998 provision.go:138] copyHostCerts
	I0307 10:53:04.234432   10998 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem, removing ...
	I0307 10:53:04.234441   10998 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem
	I0307 10:53:04.238159   10998 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/key.pem (1675 bytes)
	I0307 10:53:04.259366   10998 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem, removing ...
	I0307 10:53:04.259382   10998 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem
	I0307 10:53:04.259529   10998 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/ca.pem (1082 bytes)
	I0307 10:53:04.259842   10998 exec_runner.go:144] found /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem, removing ...
	I0307 10:53:04.259854   10998 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem
	I0307 10:53:04.259977   10998 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15985-3430/.minikube/cert.pem (1123 bytes)
	I0307 10:53:04.260212   10998 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca-key.pem org=jenkins.flannel-713000 san=[192.168.64.37 192.168.64.37 localhost 127.0.0.1 minikube flannel-713000]
	I0307 10:53:04.536274   10998 provision.go:172] copyRemoteCerts
	I0307 10:53:04.536343   10998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 10:53:04.536360   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHHostname
	I0307 10:53:04.536501   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHPort
	I0307 10:53:04.536612   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:04.536705   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHUsername
	I0307 10:53:04.536794   10998 sshutil.go:53] new ssh client: &{IP:192.168.64.37 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/id_rsa Username:docker}
	I0307 10:53:04.579458   10998 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0307 10:53:04.595336   10998 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0307 10:53:04.611062   10998 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 10:53:04.627502   10998 provision.go:86] duration metric: configureAuth took 393.478643ms
	I0307 10:53:04.627514   10998 buildroot.go:189] setting minikube options for container-runtime
	I0307 10:53:04.627658   10998 config.go:182] Loaded profile config "flannel-713000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:53:04.627672   10998 main.go:141] libmachine: (flannel-713000) Calling .DriverName
	I0307 10:53:04.627812   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHHostname
	I0307 10:53:04.627897   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHPort
	I0307 10:53:04.627982   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:04.628054   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:04.628136   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHUsername
	I0307 10:53:04.628256   10998 main.go:141] libmachine: Using SSH client type: native
	I0307 10:53:04.628551   10998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.37 22 <nil> <nil>}
	I0307 10:53:04.628560   10998 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 10:53:04.700717   10998 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 10:53:04.700729   10998 buildroot.go:70] root file system type: tmpfs
	I0307 10:53:04.700804   10998 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 10:53:04.700818   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHHostname
	I0307 10:53:04.700945   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHPort
	I0307 10:53:04.701030   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:04.701123   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:04.701207   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHUsername
	I0307 10:53:04.701332   10998 main.go:141] libmachine: Using SSH client type: native
	I0307 10:53:04.701633   10998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.37 22 <nil> <nil>}
	I0307 10:53:04.701679   10998 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 10:53:04.782592   10998 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 10:53:04.782607   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHHostname
	I0307 10:53:04.782753   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHPort
	I0307 10:53:04.782871   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:04.782950   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:04.783033   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHUsername
	I0307 10:53:04.783157   10998 main.go:141] libmachine: Using SSH client type: native
	I0307 10:53:04.783459   10998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.37 22 <nil> <nil>}
	I0307 10:53:04.783472   10998 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 10:53:05.262887   10998 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 10:53:05.262905   10998 main.go:141] libmachine: Checking connection to Docker...
	I0307 10:53:05.262912   10998 main.go:141] libmachine: (flannel-713000) Calling .GetURL
	I0307 10:53:05.263043   10998 main.go:141] libmachine: Docker is up and running!
	I0307 10:53:05.263052   10998 main.go:141] libmachine: Reticulating splines...
	I0307 10:53:05.263057   10998 client.go:171] LocalClient.Create took 10.987096714s
	I0307 10:53:05.263067   10998 start.go:167] duration metric: libmachine.API.Create for "flannel-713000" took 10.987138626s
	I0307 10:53:05.263077   10998 start.go:300] post-start starting for "flannel-713000" (driver="hyperkit")
	I0307 10:53:05.263082   10998 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 10:53:05.263096   10998 main.go:141] libmachine: (flannel-713000) Calling .DriverName
	I0307 10:53:05.263241   10998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 10:53:05.263259   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHHostname
	I0307 10:53:05.263355   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHPort
	I0307 10:53:05.263442   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:05.263521   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHUsername
	I0307 10:53:05.263603   10998 sshutil.go:53] new ssh client: &{IP:192.168.64.37 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/id_rsa Username:docker}
	I0307 10:53:05.307300   10998 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 10:53:05.309924   10998 info.go:137] Remote host: Buildroot 2021.02.12
	I0307 10:53:05.309936   10998 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/addons for local assets ...
	I0307 10:53:05.310030   10998 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15985-3430/.minikube/files for local assets ...
	I0307 10:53:05.310206   10998 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem -> 39032.pem in /etc/ssl/certs
	I0307 10:53:05.310398   10998 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 10:53:05.316425   10998 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/ssl/certs/39032.pem --> /etc/ssl/certs/39032.pem (1708 bytes)
	I0307 10:53:05.331754   10998 start.go:303] post-start completed in 68.668398ms
	I0307 10:53:05.331785   10998 main.go:141] libmachine: (flannel-713000) Calling .GetConfigRaw
	I0307 10:53:05.332330   10998 main.go:141] libmachine: (flannel-713000) Calling .GetIP
	I0307 10:53:05.332472   10998 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/flannel-713000/config.json ...
	I0307 10:53:05.332754   10998 start.go:128] duration metric: createHost completed in 11.109328444s
	I0307 10:53:05.332770   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHHostname
	I0307 10:53:05.332861   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHPort
	I0307 10:53:05.332945   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:05.333030   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:05.333102   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHUsername
	I0307 10:53:05.333242   10998 main.go:141] libmachine: Using SSH client type: native
	I0307 10:53:05.333584   10998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f6c0] 0x1412600 <nil>  [] 0s} 192.168.64.37 22 <nil> <nil>}
	I0307 10:53:05.333594   10998 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 10:53:05.405719   10998 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678215185.520844958
	
	I0307 10:53:05.405734   10998 fix.go:207] guest clock: 1678215185.520844958
	I0307 10:53:05.405739   10998 fix.go:220] Guest: 2023-03-07 10:53:05.520844958 -0800 PST Remote: 2023-03-07 10:53:05.332763 -0800 PST m=+11.768664750 (delta=188.081958ms)
	I0307 10:53:05.405757   10998 fix.go:191] guest clock delta is within tolerance: 188.081958ms
	I0307 10:53:05.405761   10998 start.go:83] releasing machines lock for "flannel-713000", held for 11.182423592s
	I0307 10:53:05.405779   10998 main.go:141] libmachine: (flannel-713000) Calling .DriverName
	I0307 10:53:05.405907   10998 main.go:141] libmachine: (flannel-713000) Calling .GetIP
	I0307 10:53:05.405997   10998 main.go:141] libmachine: (flannel-713000) Calling .DriverName
	I0307 10:53:05.406291   10998 main.go:141] libmachine: (flannel-713000) Calling .DriverName
	I0307 10:53:05.406395   10998 main.go:141] libmachine: (flannel-713000) Calling .DriverName
	I0307 10:53:05.406469   10998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 10:53:05.406493   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHHostname
	I0307 10:53:05.406511   10998 ssh_runner.go:195] Run: cat /version.json
	I0307 10:53:05.406520   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHHostname
	I0307 10:53:05.406585   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHPort
	I0307 10:53:05.406613   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHPort
	I0307 10:53:05.406710   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:05.406731   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHKeyPath
	I0307 10:53:05.406796   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHUsername
	I0307 10:53:05.406813   10998 main.go:141] libmachine: (flannel-713000) Calling .GetSSHUsername
	I0307 10:53:05.406885   10998 sshutil.go:53] new ssh client: &{IP:192.168.64.37 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/id_rsa Username:docker}
	I0307 10:53:05.406898   10998 sshutil.go:53] new ssh client: &{IP:192.168.64.37 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/flannel-713000/id_rsa Username:docker}
	I0307 10:53:05.443747   10998 ssh_runner.go:195] Run: systemctl --version
	I0307 10:53:05.447846   10998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 10:53:05.484490   10998 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 10:53:05.484566   10998 ssh_runner.go:195] Run: which cri-dockerd
	I0307 10:53:05.487397   10998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 10:53:05.493648   10998 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0307 10:53:05.504579   10998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 10:53:05.515043   10998 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 10:53:05.515057   10998 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:53:05.515143   10998 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 10:53:05.530876   10998 docker.go:630] Got preloaded images: 
	I0307 10:53:05.530888   10998 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.2 wasn't preloaded
	I0307 10:53:05.530942   10998 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 10:53:05.537526   10998 ssh_runner.go:195] Run: which lz4
	I0307 10:53:05.540035   10998 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 10:53:05.542517   10998 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 10:53:05.542534   10998 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416257894 bytes)
	I0307 10:53:06.640331   10998 docker.go:594] Took 1.100335 seconds to copy over tarball
	I0307 10:53:06.640391   10998 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 10:53:10.812663   10998 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.172213466s)
	I0307 10:53:10.812677   10998 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0307 10:53:10.841003   10998 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 10:53:10.848275   10998 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0307 10:53:10.859900   10998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:53:10.951607   10998 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:53:12.382821   10998 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.431180182s)
	I0307 10:53:12.382849   10998 start.go:485] detecting cgroup driver to use...
	I0307 10:53:12.382926   10998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:53:12.395488   10998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 10:53:12.402562   10998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 10:53:12.409565   10998 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 10:53:12.409632   10998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 10:53:12.417024   10998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:53:12.424398   10998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 10:53:12.431487   10998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 10:53:12.438383   10998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 10:53:12.445421   10998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 10:53:12.452100   10998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 10:53:12.458261   10998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 10:53:12.464229   10998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:53:12.548138   10998 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 10:53:12.561512   10998 start.go:485] detecting cgroup driver to use...
	I0307 10:53:12.561580   10998 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 10:53:12.574752   10998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:53:12.592422   10998 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 10:53:12.605492   10998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 10:53:12.615062   10998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:53:12.624089   10998 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 10:53:12.646113   10998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 10:53:12.655281   10998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 10:53:12.666843   10998 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 10:53:12.751148   10998 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 10:53:12.841880   10998 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 10:53:12.841898   10998 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0307 10:53:12.853819   10998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:53:12.937754   10998 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 10:53:14.174137   10998 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.236351247s)
	I0307 10:53:14.174210   10998 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 10:53:14.272402   10998 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 10:53:14.361393   10998 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 10:53:14.449343   10998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 10:53:14.546076   10998 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 10:53:14.580797   10998 out.go:177] 
	W0307 10:53:14.601994   10998 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0307 10:53:14.602002   10998 out.go:239] * 
	* 
	W0307 10:53:14.602617   10998 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 10:53:14.665049   10998 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/flannel/Start (21.17s)
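
The run fails at RUNTIME_ENABLE: the guest provisions and Docker comes up, but cri-docker.socket refuses to restart. A minimal triage sketch for a rerun of this profile, assuming the flannel-713000 VM is still present and SSH-reachable; it only follows up on what the error text itself names (journalctl -xe) and what the advice box asks for (minikube logs --file=logs.txt):

    # Hypothetical follow-up commands against a live rerun of the profile.
    minikube ssh -p flannel-713000 -- 'systemctl status cri-docker.socket'      # confirm the unit state after the failed restart
    minikube ssh -p flannel-713000 -- 'sudo journalctl -xeu cri-docker.socket'  # the detail "Job failed. See journalctl -xe" points to
    minikube logs -p flannel-713000 --file=logs.txt                             # collect logs.txt for a GitHub issue, per the box above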
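Separately, the docker.service unit written mid-log explains, in its own comments, the systemd rule it relies on: ExecStart= accumulates across unit files, so an empty ExecStart= must first clear the inherited command, or systemd rejects a non-oneshot service with "more than one ExecStart= setting". A generic sketch of that override pattern, with the drop-in path and daemon flags purely illustrative:

    # Hypothetical drop-in: clear the inherited command, then set a new one.
    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '%s\n' '[Service]' 'ExecStart=' \
      'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' \
      | sudo tee /etc/systemd/system/docker.service.d/override.conf
    sudo systemctl daemon-reload && sudo systemctl restart docker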


Test pass (284/306)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 38.22
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
10 TestDownloadOnly/v1.26.2/json-events 21.59
11 TestDownloadOnly/v1.26.2/preload-exists 0
14 TestDownloadOnly/v1.26.2/kubectl 0
15 TestDownloadOnly/v1.26.2/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.41
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
19 TestBinaryMirror 0.99
20 TestOffline 63.2
22 TestAddons/Setup 144.93
24 TestAddons/parallel/Registry 16.84
25 TestAddons/parallel/Ingress 19.39
26 TestAddons/parallel/MetricsServer 5.47
27 TestAddons/parallel/HelmTiller 13.4
29 TestAddons/parallel/CSI 44.74
30 TestAddons/parallel/Headlamp 9.39
31 TestAddons/parallel/CloudSpanner 5.34
34 TestAddons/serial/GCPAuth/Namespaces 0.09
35 TestAddons/StoppedEnableDisable 8.56
36 TestCertOptions 45.14
37 TestCertExpiration 254.96
38 TestDockerFlags 47.22
39 TestForceSystemdFlag 47.21
40 TestForceSystemdEnv 48.15
42 TestHyperKitDriverInstallOrUpdate 9.08
45 TestErrorSpam/setup 39.05
46 TestErrorSpam/start 1.31
47 TestErrorSpam/status 0.46
48 TestErrorSpam/pause 1.24
49 TestErrorSpam/unpause 1.31
50 TestErrorSpam/stop 3.65
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 59.92
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 39.21
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.07
61 TestFunctional/serial/CacheCmd/cache/add_remote 7.75
62 TestFunctional/serial/CacheCmd/cache/add_local 1.43
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
64 TestFunctional/serial/CacheCmd/cache/list 0.07
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.15
66 TestFunctional/serial/CacheCmd/cache/cache_reload 1.79
67 TestFunctional/serial/CacheCmd/cache/delete 0.14
68 TestFunctional/serial/MinikubeKubectlCmd 0.51
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.79
70 TestFunctional/serial/ExtraConfig 47.43
71 TestFunctional/serial/ComponentHealth 0.05
72 TestFunctional/serial/LogsCmd 2.94
73 TestFunctional/serial/LogsFileCmd 2.6
75 TestFunctional/parallel/ConfigCmd 0.45
76 TestFunctional/parallel/DashboardCmd 12.2
77 TestFunctional/parallel/DryRun 0.89
78 TestFunctional/parallel/InternationalLanguage 0.45
79 TestFunctional/parallel/StatusCmd 0.44
83 TestFunctional/parallel/ServiceCmdConnect 8.55
84 TestFunctional/parallel/AddonsCmd 0.25
85 TestFunctional/parallel/PersistentVolumeClaim 26.76
87 TestFunctional/parallel/SSHCmd 0.29
88 TestFunctional/parallel/CpCmd 0.57
89 TestFunctional/parallel/MySQL 21.77
90 TestFunctional/parallel/FileSync 0.15
91 TestFunctional/parallel/CertSync 1.01
95 TestFunctional/parallel/NodeLabels 0.08
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.11
99 TestFunctional/parallel/License 0.73
100 TestFunctional/parallel/Version/short 0.13
101 TestFunctional/parallel/Version/components 0.35
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.14
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.15
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.14
106 TestFunctional/parallel/ImageCommands/ImageBuild 3.85
107 TestFunctional/parallel/ImageCommands/Setup 3.11
108 TestFunctional/parallel/DockerEnv/bash 0.74
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.94
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.11
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.58
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.12
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.42
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.05
119 TestFunctional/parallel/ServiceCmd/DeployApp 12.11
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.13
124 TestFunctional/parallel/ServiceCmd/List 0.35
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.23
127 TestFunctional/parallel/ServiceCmd/Format 0.23
128 TestFunctional/parallel/ServiceCmd/URL 0.23
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
136 TestFunctional/parallel/ProfileCmd/profile_list 0.27
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
138 TestFunctional/parallel/MountCmd/any-port 7.75
139 TestFunctional/parallel/MountCmd/specific-port 1.32
140 TestFunctional/delete_addon-resizer_images 0.15
141 TestFunctional/delete_my-image_image 0.06
142 TestFunctional/delete_minikube_cached_images 0.06
146 TestImageBuild/serial/NormalBuild 2.07
147 TestImageBuild/serial/BuildWithBuildArg 0.93
148 TestImageBuild/serial/BuildWithDockerIgnore 0.3
149 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.24
152 TestIngressAddonLegacy/StartLegacyK8sCluster 76.3
154 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.76
155 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.52
156 TestIngressAddonLegacy/serial/ValidateIngressAddons 38.75
159 TestJSONOutput/start/Command 56.97
160 TestJSONOutput/start/Audit 0
162 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/pause/Command 0.45
166 TestJSONOutput/pause/Audit 0
168 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/unpause/Command 0.45
172 TestJSONOutput/unpause/Audit 0
174 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/stop/Command 8.15
178 TestJSONOutput/stop/Audit 0
180 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
182 TestErrorJSONOutput 0.76
187 TestMainNoArgs 0.07
188 TestMinikubeProfile 93.37
191 TestMountStart/serial/StartWithMountFirst 15.5
192 TestMountStart/serial/VerifyMountFirst 0.29
193 TestMountStart/serial/StartWithMountSecond 15.44
194 TestMountStart/serial/VerifyMountSecond 0.27
195 TestMountStart/serial/DeleteFirst 2.37
196 TestMountStart/serial/VerifyMountPostDelete 0.27
197 TestMountStart/serial/Stop 2.2
198 TestMountStart/serial/RestartStopped 40.75
199 TestMountStart/serial/VerifyMountPostStop 0.28
202 TestMultiNode/serial/FreshStart2Nodes 95.49
203 TestMultiNode/serial/DeployApp2Nodes 5.35
204 TestMultiNode/serial/PingHostFrom2Pods 0.86
205 TestMultiNode/serial/AddNode 407
206 TestMultiNode/serial/ProfileList 0.2
207 TestMultiNode/serial/CopyFile 4.92
208 TestMultiNode/serial/StopNode 2.71
209 TestMultiNode/serial/StartAfterStop 29.53
211 TestMultiNode/serial/DeleteNode 8.8
212 TestMultiNode/serial/StopMultiNode 16.46
213 TestMultiNode/serial/RestartMultiNode 76.83
214 TestMultiNode/serial/ValidateNameConflict 46.95
218 TestPreload 174.4
220 TestScheduledStopUnix 107.75
221 TestSkaffold 83.96
226 TestKubernetesUpgrade 151.66
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.11
240 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.79
241 TestStoppedBinaryUpgrade/Setup 2.55
242 TestStoppedBinaryUpgrade/Upgrade 161.76
244 TestPause/serial/Start 53.72
245 TestPause/serial/SecondStartNoReconfiguration 52.56
246 TestStoppedBinaryUpgrade/MinikubeLogs 2.49
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.56
256 TestNoKubernetes/serial/StartWithK8s 41.15
257 TestPause/serial/Pause 0.48
258 TestPause/serial/VerifyStatus 0.14
259 TestPause/serial/Unpause 0.51
260 TestPause/serial/PauseAgain 0.55
261 TestPause/serial/DeletePaused 5.26
262 TestPause/serial/VerifyDeletedResources 0.17
263 TestNetworkPlugins/group/auto/Start 59.63
264 TestNoKubernetes/serial/StartWithStopK8s 7.72
265 TestNoKubernetes/serial/Start 18.44
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.12
267 TestNoKubernetes/serial/ProfileList 0.5
268 TestNoKubernetes/serial/Stop 2.2
269 TestNetworkPlugins/group/auto/KubeletFlags 0.14
270 TestNetworkPlugins/group/auto/NetCatPod 13.4
271 TestNoKubernetes/serial/StartNoArgs 15.76
272 TestNetworkPlugins/group/auto/DNS 0.12
273 TestNetworkPlugins/group/auto/Localhost 0.1
274 TestNetworkPlugins/group/auto/HairPin 0.11
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.11
276 TestNetworkPlugins/group/calico/Start 72.35
277 TestNetworkPlugins/group/custom-flannel/Start 61.52
278 TestNetworkPlugins/group/calico/ControllerPod 5.01
279 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.14
280 TestNetworkPlugins/group/custom-flannel/NetCatPod 16.21
281 TestNetworkPlugins/group/calico/KubeletFlags 0.14
282 TestNetworkPlugins/group/calico/NetCatPod 14.2
283 TestNetworkPlugins/group/custom-flannel/DNS 0.12
284 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
285 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
286 TestNetworkPlugins/group/calico/DNS 0.12
287 TestNetworkPlugins/group/calico/Localhost 0.1
288 TestNetworkPlugins/group/calico/HairPin 0.11
289 TestNetworkPlugins/group/false/Start 60.25
290 TestNetworkPlugins/group/kindnet/Start 77.1
291 TestNetworkPlugins/group/false/KubeletFlags 0.15
292 TestNetworkPlugins/group/false/NetCatPod 15.19
293 TestNetworkPlugins/group/false/DNS 0.12
294 TestNetworkPlugins/group/false/Localhost 0.11
295 TestNetworkPlugins/group/false/HairPin 0.1
296 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
297 TestNetworkPlugins/group/kindnet/KubeletFlags 0.16
298 TestNetworkPlugins/group/kindnet/NetCatPod 16.22
300 TestNetworkPlugins/group/kindnet/DNS 0.13
301 TestNetworkPlugins/group/kindnet/Localhost 0.11
302 TestNetworkPlugins/group/kindnet/HairPin 0.1
303 TestNetworkPlugins/group/enable-default-cni/Start 56.94
304 TestNetworkPlugins/group/bridge/Start 96.77
305 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.14
306 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.19
307 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
308 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
309 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
310 TestNetworkPlugins/group/kubenet/Start 54.88
311 TestNetworkPlugins/group/bridge/KubeletFlags 0.18
312 TestNetworkPlugins/group/bridge/NetCatPod 16.2
313 TestNetworkPlugins/group/bridge/DNS 0.13
314 TestNetworkPlugins/group/bridge/Localhost 0.11
315 TestNetworkPlugins/group/bridge/HairPin 0.1
317 TestStartStop/group/old-k8s-version/serial/FirstStart 138.62
318 TestNetworkPlugins/group/kubenet/KubeletFlags 0.15
319 TestNetworkPlugins/group/kubenet/NetCatPod 15.22
320 TestNetworkPlugins/group/kubenet/DNS 0.12
321 TestNetworkPlugins/group/kubenet/Localhost 0.11
322 TestNetworkPlugins/group/kubenet/HairPin 0.1
324 TestStartStop/group/no-preload/serial/FirstStart 108.56
325 TestStartStop/group/old-k8s-version/serial/DeployApp 10.31
326 TestStartStop/group/no-preload/serial/DeployApp 9.25
327 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.64
328 TestStartStop/group/old-k8s-version/serial/Stop 8.22
329 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.65
330 TestStartStop/group/no-preload/serial/Stop 8.21
331 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
332 TestStartStop/group/old-k8s-version/serial/SecondStart 473.72
333 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
334 TestStartStop/group/no-preload/serial/SecondStart 331.38
335 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
336 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
337 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.17
338 TestStartStop/group/no-preload/serial/Pause 1.7
340 TestStartStop/group/embed-certs/serial/FirstStart 63.51
341 TestStartStop/group/embed-certs/serial/DeployApp 9.25
342 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.62
343 TestStartStop/group/embed-certs/serial/Stop 8.24
344 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
345 TestStartStop/group/embed-certs/serial/SecondStart 297.36
346 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
347 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
348 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.17
349 TestStartStop/group/old-k8s-version/serial/Pause 1.71
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.74
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.64
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.24
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.3
356 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 297.09
357 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
358 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
359 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
360 TestStartStop/group/embed-certs/serial/Pause 1.75
362 TestStartStop/group/newest-cni/serial/FirstStart 53.39
363 TestStartStop/group/newest-cni/serial/DeployApp 0
364 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.71
365 TestStartStop/group/newest-cni/serial/Stop 8.29
366 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.29
367 TestStartStop/group/newest-cni/serial/SecondStart 38.7
368 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
369 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
370 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.16
371 TestStartStop/group/newest-cni/serial/Pause 1.86
372 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
373 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
374 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.17
375 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.86

TestDownloadOnly/v1.16.0/json-events (38.22s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-556000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-556000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (38.221357066s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (38.22s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-556000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-556000: exit status 85 (282.306978ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-556000 | jenkins | v1.29.0 | 07 Mar 23 10:00 PST |          |
	|         | -p download-only-556000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/07 10:00:50
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 10:00:50.835720    3905 out.go:296] Setting OutFile to fd 1 ...
	I0307 10:00:50.835907    3905 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:00:50.835913    3905 out.go:309] Setting ErrFile to fd 2...
	I0307 10:00:50.835917    3905 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:00:50.836018    3905 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15985-3430/.minikube/bin
	W0307 10:00:50.836124    3905 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15985-3430/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15985-3430/.minikube/config/config.json: no such file or directory
	I0307 10:00:50.837596    3905 out.go:303] Setting JSON to true
	I0307 10:00:50.856009    3905 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1825,"bootTime":1678210225,"procs":394,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 10:00:50.856089    3905 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0307 10:00:50.878151    3905 out.go:97] [download-only-556000] minikube v1.29.0 on Darwin 13.2.1
	W0307 10:00:50.878396    3905 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball: no such file or directory
	I0307 10:00:50.899633    3905 out.go:169] MINIKUBE_LOCATION=15985
	I0307 10:00:50.878407    3905 notify.go:220] Checking for updates...
	I0307 10:00:50.942647    3905 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:00:50.964040    3905 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 10:00:50.985741    3905 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:00:51.006764    3905 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	W0307 10:00:51.048473    3905 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 10:00:51.048756    3905 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 10:00:51.132589    3905 out.go:97] Using the hyperkit driver based on user configuration
	I0307 10:00:51.132641    3905 start.go:296] selected driver: hyperkit
	I0307 10:00:51.132668    3905 start.go:857] validating driver "hyperkit" against <nil>
	I0307 10:00:51.132743    3905 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:00:51.132949    3905 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15985-3430/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0307 10:00:51.271427    3905 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.29.0
	I0307 10:00:51.276528    3905 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:00:51.276552    3905 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0307 10:00:51.276586    3905 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0307 10:00:51.280755    3905 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0307 10:00:51.280917    3905 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 10:00:51.280941    3905 cni.go:84] Creating CNI manager for ""
	I0307 10:00:51.280953    3905 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0307 10:00:51.280960    3905 start_flags.go:319] config:
	{Name:download-only-556000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-556000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 10:00:51.281178    3905 iso.go:125] acquiring lock: {Name:mk7e0ac9e85418e0580033b84b7097185a725e89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:00:51.302932    3905 out.go:97] Downloading VM boot image ...
	I0307 10:00:51.303180    3905 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/iso/amd64/minikube-v1.29.0-1677261626-15923-amd64.iso
	I0307 10:01:10.039724    3905 out.go:97] Starting control plane node download-only-556000 in cluster download-only-556000
	I0307 10:01:10.039822    3905 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0307 10:01:10.143818    3905 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0307 10:01:10.143854    3905 cache.go:57] Caching tarball of preloaded images
	I0307 10:01:10.144172    3905 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0307 10:01:10.166012    3905 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0307 10:01:10.166108    3905 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0307 10:01:10.369007    3905 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0307 10:01:22.635653    3905 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0307 10:01:22.635803    3905 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0307 10:01:23.178990    3905 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0307 10:01:23.179208    3905 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/download-only-556000/config.json ...
	I0307 10:01:23.179233    3905 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/download-only-556000/config.json: {Name:mkc625ad591385685e32aa75d98657a1172b230f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 10:01:23.179490    3905 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0307 10:01:23.179744    3905 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-556000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)
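
The exit status 85 above is the expected outcome, not a regression: as the captured output shows, --download-only populates the cache without ever creating a control-plane node, so `minikube logs` has nothing to read. A minimal sketch reproducing it, using the exact commands from this run:

	# Populate the cache only; no VM or node is created:
	out/minikube-darwin-amd64 start -o=json --download-only -p download-only-556000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit

	# With no node, `logs` exits 85 (which is what LogsDuration asserts):
	out/minikube-darwin-amd64 logs -p download-only-556000; echo "exit: $?"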

TestDownloadOnly/v1.26.2/json-events (21.59s)

=== RUN   TestDownloadOnly/v1.26.2/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-556000 --force --alsologtostderr --kubernetes-version=v1.26.2 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-556000 --force --alsologtostderr --kubernetes-version=v1.26.2 --container-runtime=docker --driver=hyperkit : (21.588809823s)
--- PASS: TestDownloadOnly/v1.26.2/json-events (21.59s)

TestDownloadOnly/v1.26.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.2/preload-exists
--- PASS: TestDownloadOnly/v1.26.2/preload-exists (0.00s)

TestDownloadOnly/v1.26.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.2/kubectl
--- PASS: TestDownloadOnly/v1.26.2/kubectl (0.00s)

TestDownloadOnly/v1.26.2/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.26.2/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-556000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-556000: exit status 85 (287.482812ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-556000 | jenkins | v1.29.0 | 07 Mar 23 10:00 PST |          |
	|         | -p download-only-556000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-556000 | jenkins | v1.29.0 | 07 Mar 23 10:01 PST |          |
	|         | -p download-only-556000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/07 10:01:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 10:01:29.340895    3953 out.go:296] Setting OutFile to fd 1 ...
	I0307 10:01:29.341084    3953 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:01:29.341089    3953 out.go:309] Setting ErrFile to fd 2...
	I0307 10:01:29.341093    3953 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:01:29.341195    3953 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15985-3430/.minikube/bin
	W0307 10:01:29.341303    3953 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15985-3430/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15985-3430/.minikube/config/config.json: no such file or directory
	I0307 10:01:29.342492    3953 out.go:303] Setting JSON to true
	I0307 10:01:29.360864    3953 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1864,"bootTime":1678210225,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 10:01:29.360955    3953 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0307 10:01:29.382095    3953 out.go:97] [download-only-556000] minikube v1.29.0 on Darwin 13.2.1
	I0307 10:01:29.382197    3953 notify.go:220] Checking for updates...
	I0307 10:01:29.403155    3953 out.go:169] MINIKUBE_LOCATION=15985
	I0307 10:01:29.424440    3953 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:01:29.446459    3953 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 10:01:29.468271    3953 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:01:29.490629    3953 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	W0307 10:01:29.533100    3953 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 10:01:29.533788    3953 config.go:182] Loaded profile config "download-only-556000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0307 10:01:29.533869    3953 start.go:765] api.Load failed for download-only-556000: filestore "download-only-556000": Docker machine "download-only-556000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0307 10:01:29.533951    3953 driver.go:365] Setting default libvirt URI to qemu:///system
	W0307 10:01:29.533984    3953 start.go:765] api.Load failed for download-only-556000: filestore "download-only-556000": Docker machine "download-only-556000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0307 10:01:29.562206    3953 out.go:97] Using the hyperkit driver based on existing profile
	I0307 10:01:29.562259    3953 start.go:296] selected driver: hyperkit
	I0307 10:01:29.562281    3953 start.go:857] validating driver "hyperkit" against &{Name:download-only-556000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-556000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 10:01:29.562542    3953 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:01:29.562753    3953 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15985-3430/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0307 10:01:29.570727    3953 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.29.0
	I0307 10:01:29.574025    3953 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:01:29.574042    3953 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0307 10:01:29.576223    3953 cni.go:84] Creating CNI manager for ""
	I0307 10:01:29.576247    3953 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 10:01:29.576261    3953 start_flags.go:319] config:
	{Name:download-only-556000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:download-only-556000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 10:01:29.576379    3953 iso.go:125] acquiring lock: {Name:mk7e0ac9e85418e0580033b84b7097185a725e89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 10:01:29.597328    3953 out.go:97] Starting control plane node download-only-556000 in cluster download-only-556000
	I0307 10:01:29.597393    3953 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:01:29.702418    3953 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.2/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0307 10:01:29.702463    3953 cache.go:57] Caching tarball of preloaded images
	I0307 10:01:29.702813    3953 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:01:29.724100    3953 out.go:97] Downloading Kubernetes v1.26.2 preload ...
	I0307 10:01:29.724131    3953 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 ...
	I0307 10:01:29.939002    3953 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.2/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4?checksum=md5:f7b26d32aaabacae8612fb9b9e1a4b89 -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0307 10:01:45.214786    3953 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 ...
	I0307 10:01:45.214978    3953 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 ...
	I0307 10:01:45.824901    3953 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0307 10:01:45.824986    3953 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/download-only-556000/config.json ...
	I0307 10:01:45.825365    3953 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0307 10:01:45.825677    3953 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/15985-3430/.minikube/cache/darwin/amd64/v1.26.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-556000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.2/LogsDuration (0.29s)

TestDownloadOnly/DeleteAll (0.41s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.41s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-556000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

TestBinaryMirror (0.99s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-540000 --alsologtostderr --binary-mirror http://127.0.0.1:49426 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-540000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-540000
--- PASS: TestBinaryMirror (0.99s)
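
TestBinaryMirror points the Kubernetes binary downloads at a local HTTP endpoint (port 49426 above was picked by the test harness). A hand-run sketch, assuming a hypothetical mirror directory and port; the mirror's layout would need to mirror the upstream release paths:

	# Serve a local mirror directory over HTTP:
	python3 -m http.server 8080 --directory /path/to/mirror &

	# Point minikube's binary downloads at it (profile name is illustrative):
	out/minikube-darwin-amd64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:8080 --driver=hyperkit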

TestOffline (63.2s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-807000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-807000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (57.920858517s)
helpers_test.go:175: Cleaning up "offline-docker-807000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-807000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-807000: (5.282227212s)
--- PASS: TestOffline (63.20s)

TestAddons/Setup (144.93s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-251000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-251000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m24.929848701s)
--- PASS: TestAddons/Setup (144.93s)

TestAddons/parallel/Registry (16.84s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 7.586623ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-kcpf7" [8ceab843-c37b-4ee4-a4f8-12053540f592] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008155607s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rm9z7" [d8779968-460e-4ac8-a700-7084e95bd862] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006994712s
addons_test.go:305: (dbg) Run:  kubectl --context addons-251000 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-251000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-251000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.230188057s)
addons_test.go:324: (dbg) Run:  out/minikube-darwin-amd64 -p addons-251000 ip
2023/03/07 10:04:34 [DEBUG] GET http://192.168.64.2:5000
addons_test.go:353: (dbg) Run:  out/minikube-darwin-amd64 -p addons-251000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.84s)
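
The same check is easy to replay by hand against a live cluster; the image, service DNS name, and profile below are taken verbatim from the run above:

	# Probe the in-cluster registry service from a throwaway pod:
	kubectl --context addons-251000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

	# The host-reachable address the test then probes on port 5000:
	out/minikube-darwin-amd64 -p addons-251000 ip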

TestAddons/parallel/Ingress (19.39s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-251000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-251000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-251000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6bfaf09d-a627-4dd7-a601-08f6d47fc022] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6bfaf09d-a627-4dd7-a601-08f6d47fc022] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.007040164s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p addons-251000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-251000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p addons-251000 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.64.2
addons_test.go:271: (dbg) Run:  out/minikube-darwin-amd64 -p addons-251000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 -p addons-251000 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-darwin-amd64 -p addons-251000 addons disable ingress --alsologtostderr -v=1: (7.386390665s)
--- PASS: TestAddons/parallel/Ingress (19.39s)
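
To verify the ingress path by hand, the two probes the test uses can be replayed directly; commands and the address are taken from the run above (192.168.64.2 is this run's `minikube ip`):

	# Curl the ingress from inside the VM with the expected Host header:
	out/minikube-darwin-amd64 -p addons-251000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

	# Resolve a test name through the ingress-dns addon:
	nslookup hello-john.test 192.168.64.2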

TestAddons/parallel/MetricsServer (5.47s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 1.688293ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-zpskv" [70f5ebde-7908-4abd-af8c-16ab22309df2] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008060305s
addons_test.go:380: (dbg) Run:  kubectl --context addons-251000 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-darwin-amd64 -p addons-251000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.47s)

TestAddons/parallel/HelmTiller (13.4s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 2.267093ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-g2ndw" [b07f0971-fec9-400e-b45a-e563385646a8] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.011936113s
addons_test.go:438: (dbg) Run:  kubectl --context addons-251000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-251000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.070526553s)
addons_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 -p addons-251000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.40s)
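
The Tiller health check reduces to a single command, shown here exactly as the run above issued it (the helm 2.x client image queries the in-cluster tiller-deploy):

	kubectl --context addons-251000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version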

TestAddons/parallel/CSI (44.74s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 3.5834ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-251000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-251000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [23a0d72e-e8e9-450a-a1b1-7b2097983811] Pending
helpers_test.go:344: "task-pv-pod" [23a0d72e-e8e9-450a-a1b1-7b2097983811] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [23a0d72e-e8e9-450a-a1b1-7b2097983811] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.007391083s
addons_test.go:549: (dbg) Run:  kubectl --context addons-251000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-251000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-251000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-251000 delete pod task-pv-pod
addons_test.go:565: (dbg) Run:  kubectl --context addons-251000 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-251000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-251000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-251000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [99d85e78-5b48-4044-97d6-05e8d8691a47] Pending
helpers_test.go:344: "task-pv-pod-restore" [99d85e78-5b48-4044-97d6-05e8d8691a47] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [99d85e78-5b48-4044-97d6-05e8d8691a47] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.009990009s
addons_test.go:591: (dbg) Run:  kubectl --context addons-251000 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-251000 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-251000 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-darwin-amd64 -p addons-251000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-darwin-amd64 -p addons-251000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.302355735s)
addons_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 -p addons-251000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.74s)
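
Condensed, the provision/snapshot/restore sequence exercised above is the following (the manifests live under the repo's testdata/csi-hostpath-driver and are not reproduced here):

	kubectl --context addons-251000 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-251000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-251000 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-251000 delete pod task-pv-pod
	kubectl --context addons-251000 delete pvc hpvc
	kubectl --context addons-251000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-251000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml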

TestAddons/parallel/Headlamp (9.39s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-251000 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-251000 --alsologtostderr -v=1: (1.382413878s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-qwps4" [0076f2f8-8df5-4a01-a375-64bb7ddec1ca] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-qwps4" [0076f2f8-8df5-4a01-a375-64bb7ddec1ca] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 8.006666497s
--- PASS: TestAddons/parallel/Headlamp (9.39s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.34s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-58d646969f-mfcsn" [cb4ab7ae-8da7-4f1a-9b7e-2d67bbd1f7d1] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006889467s
addons_test.go:813: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-251000
--- PASS: TestAddons/parallel/CloudSpanner (5.34s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-251000 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-251000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

                                                
                                    
TestAddons/StoppedEnableDisable (8.56s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-251000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-251000: (8.204791982s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-251000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-251000
--- PASS: TestAddons/StoppedEnableDisable (8.56s)

                                                
                                    
TestCertOptions (45.14s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-717000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-717000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (41.335403653s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-717000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-717000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-717000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-717000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-717000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-717000: (3.462152279s)
--- PASS: TestCertOptions (45.14s)
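
The two assertions above are easy to spot-check by hand against the same profile; the grep filter and jsonpath query below are mine, not the test's:

	# The extra --apiserver-ips/--apiserver-names should appear as SANs on the apiserver cert.
	out/minikube-darwin-amd64 -p cert-options-717000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	# The kubeconfig server URL should use the non-default --apiserver-port=8555.
	kubectl --context cert-options-717000 config view \
	  -o jsonpath='{.clusters[?(@.name=="cert-options-717000")].cluster.server}'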

                                                
                                    
TestCertExpiration (254.96s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-320000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-320000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (39.818630124s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-320000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-320000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (29.879531796s)
helpers_test.go:175: Cleaning up "cert-expiration-320000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-320000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-320000: (5.257388253s)
--- PASS: TestCertExpiration (254.96s)

                                                
                                    
TestDockerFlags (47.22s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-935000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-935000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (41.659048676s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-935000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-935000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-935000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-935000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-935000: (5.256038131s)
--- PASS: TestDockerFlags (47.22s)
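
Roughly what the two systemctl probes are expected to return for this run (a sketch only; the surrounding dockerd arguments are elided and the exact unit layout may differ by ISO version):

	$ out/minikube-darwin-amd64 -p docker-flags-935000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	Environment=FOO=BAR BAZ=BAT
	$ out/minikube-darwin-amd64 -p docker-flags-935000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }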

                                                
                                    
TestForceSystemdFlag (47.21s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-495000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-495000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (43.56845932s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-495000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-495000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-495000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-495000: (3.477647618s)
--- PASS: TestForceSystemdFlag (47.21s)

                                                
                                    
TestForceSystemdEnv (48.15s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-103000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0307 10:39:36.861522    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-103000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (42.706604296s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-103000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-103000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-103000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-103000: (5.258711579s)
--- PASS: TestForceSystemdEnv (48.15s)
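
Both force-systemd variants reduce to the same node-level check; the expected value is "systemd" (a "cgroupfs" result would mean the flag or MINIKUBE_FORCE_SYSTEMD env was not honored):

	out/minikube-darwin-amd64 -p force-systemd-env-103000 ssh "docker info --format {{.CgroupDriver}}"
	# expected: systemd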

                                                
                                    
TestHyperKitDriverInstallOrUpdate (9.08s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.08s)

                                                
                                    
TestErrorSpam/setup (39.05s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-764000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-764000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 --driver=hyperkit : (39.048637452s)
--- PASS: TestErrorSpam/setup (39.05s)

                                                
                                    
TestErrorSpam/start (1.31s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 start --dry-run
--- PASS: TestErrorSpam/start (1.31s)

                                                
                                    
TestErrorSpam/status (0.46s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 status
--- PASS: TestErrorSpam/status (0.46s)

                                                
                                    
TestErrorSpam/pause (1.24s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 pause
--- PASS: TestErrorSpam/pause (1.24s)

                                                
                                    
TestErrorSpam/unpause (1.31s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 unpause
--- PASS: TestErrorSpam/unpause (1.31s)

                                                
                                    
TestErrorSpam/stop (3.65s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 stop: (3.22500138s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-764000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-764000 stop
--- PASS: TestErrorSpam/stop (3.65s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /Users/jenkins/minikube-integration/15985-3430/.minikube/files/etc/test/nested/copy/3903/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (59.92s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-333000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2229: (dbg) Done: out/minikube-darwin-amd64 start -p functional-333000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (59.919804277s)
--- PASS: TestFunctional/serial/StartWithProxy (59.92s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.21s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-333000 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-darwin-amd64 start -p functional-333000 --alsologtostderr -v=8: (39.20455276s)
functional_test.go:658: soft start took 39.205169309s for "functional-333000" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.21s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-333000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (7.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-333000 cache add k8s.gcr.io/pause:3.1: (2.832509749s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-333000 cache add k8s.gcr.io/pause:3.3: (2.59420104s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 cache add k8s.gcr.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-333000 cache add k8s.gcr.io/pause:latest: (2.318289554s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.75s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-333000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local3406023731/001
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 cache add minikube-local-cache-test:functional-333000
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 cache delete minikube-local-cache-test:functional-333000
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-333000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-333000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (123.799984ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-darwin-amd64 -p functional-333000 cache reload: (1.366147495s)
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)
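
The cache_reload sequence above can be replayed verbatim: remove the cached image inside the node, confirm it is gone (the inspecti call exits 1, as logged), then re-push it from the host-side cache:

	out/minikube-darwin-amd64 -p functional-333000 ssh sudo docker rmi k8s.gcr.io/pause:latest
	out/minikube-darwin-amd64 -p functional-333000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exit 1 while missing
	out/minikube-darwin-amd64 -p functional-333000 cache reload
	out/minikube-darwin-amd64 -p functional-333000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again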

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.51s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 kubectl -- --context functional-333000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.51s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.79s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-333000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.79s)

                                                
                                    
TestFunctional/serial/ExtraConfig (47.43s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-333000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0307 10:09:18.286120    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:09:18.292849    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:09:18.304386    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:09:18.325155    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:09:18.366173    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:09:18.446549    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:09:18.606684    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:09:18.927151    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:09:19.567555    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:09:20.848997    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:09:23.409436    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-darwin-amd64 start -p functional-333000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.430865234s)
functional_test.go:756: restart took 47.431026885s for "functional-333000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (47.43s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-333000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
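
The health gate here is just the phase and Ready condition of each control-plane pod; an equivalent ad-hoc query (the jsonpath is mine, not the test's) would be:

	kubectl --context functional-333000 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" phase="}{.status.phase}{"\n"}{end}'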

                                                
                                    
TestFunctional/serial/LogsCmd (2.94s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 logs
E0307 10:09:28.529650    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
functional_test.go:1231: (dbg) Done: out/minikube-darwin-amd64 -p functional-333000 logs: (2.937636314s)
--- PASS: TestFunctional/serial/LogsCmd (2.94s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (2.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3367681643/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-darwin-amd64 -p functional-333000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3367681643/001/logs.txt: (2.598118998s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.60s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-333000 config get cpus: exit status 14 (66.424964ms)
** stderr **
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-333000 config get cpus: exit status 14 (44.518523ms)
** stderr **
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-333000 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-333000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 5425: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.20s)

                                                
                                    
TestFunctional/parallel/DryRun (0.89s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-333000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:969: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-333000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (476.646441ms)

-- stdout --
	* [functional-333000] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile

-- /stdout --
** stderr ** 
	I0307 10:10:30.220411    5396 out.go:296] Setting OutFile to fd 1 ...
	I0307 10:10:30.220582    5396 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:10:30.220587    5396 out.go:309] Setting ErrFile to fd 2...
	I0307 10:10:30.220591    5396 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:10:30.220700    5396 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15985-3430/.minikube/bin
	I0307 10:10:30.222033    5396 out.go:303] Setting JSON to false
	I0307 10:10:30.240539    5396 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2405,"bootTime":1678210225,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 10:10:30.240635    5396 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0307 10:10:30.262482    5396 out.go:177] * [functional-333000] minikube v1.29.0 on Darwin 13.2.1
	I0307 10:10:30.284324    5396 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 10:10:30.284291    5396 notify.go:220] Checking for updates...
	I0307 10:10:30.326200    5396 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:10:30.347296    5396 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 10:10:30.389381    5396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:10:30.431994    5396 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	I0307 10:10:30.474267    5396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:10:30.496646    5396 config.go:182] Loaded profile config "functional-333000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:10:30.497331    5396 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:10:30.497422    5396 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:10:30.505178    5396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50421
	I0307 10:10:30.505521    5396 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:10:30.505938    5396 main.go:141] libmachine: Using API Version  1
	I0307 10:10:30.505947    5396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:10:30.506172    5396 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:10:30.506269    5396 main.go:141] libmachine: (functional-333000) Calling .DriverName
	I0307 10:10:30.506399    5396 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 10:10:30.506667    5396 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:10:30.506693    5396 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:10:30.513467    5396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50423
	I0307 10:10:30.513790    5396 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:10:30.514158    5396 main.go:141] libmachine: Using API Version  1
	I0307 10:10:30.514176    5396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:10:30.514371    5396 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:10:30.514478    5396 main.go:141] libmachine: (functional-333000) Calling .DriverName
	I0307 10:10:30.542196    5396 out.go:177] * Using the hyperkit driver based on existing profile
	I0307 10:10:30.563326    5396 start.go:296] selected driver: hyperkit
	I0307 10:10:30.563394    5396 start.go:857] validating driver "hyperkit" against &{Name:functional-333000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:functional-333000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 10:10:30.563578    5396 start.go:868] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:10:30.587895    5396 out.go:177] 
	W0307 10:10:30.609176    5396 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0307 10:10:30.630169    5396 out.go:177] 
** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-333000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (0.89s)
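
Note the reserved exit code: --dry-run validates the request against the existing profile without touching the VM, and an undersized memory request fails with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY; per the message above, anything under 1800MB is rejected). A minimal reproduction:

	out/minikube-darwin-amd64 start -p functional-333000 --dry-run --memory 250MB --driver=hyperkit
	echo $?   # 23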

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.45s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-333000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-333000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (450.795347ms)

-- stdout --
	* [functional-333000] minikube v1.29.0 sur Darwin 13.2.1
	  - MINIKUBE_LOCATION=15985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant

-- /stdout --
** stderr ** 
	I0307 10:10:31.104748    5412 out.go:296] Setting OutFile to fd 1 ...
	I0307 10:10:31.105153    5412 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:10:31.105163    5412 out.go:309] Setting ErrFile to fd 2...
	I0307 10:10:31.105170    5412 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:10:31.105470    5412 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15985-3430/.minikube/bin
	I0307 10:10:31.107305    5412 out.go:303] Setting JSON to false
	I0307 10:10:31.126196    5412 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2406,"bootTime":1678210225,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0307 10:10:31.126302    5412 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0307 10:10:31.148345    5412 out.go:177] * [functional-333000] minikube v1.29.0 sur Darwin 13.2.1
	I0307 10:10:31.191246    5412 out.go:177]   - MINIKUBE_LOCATION=15985
	I0307 10:10:31.191235    5412 notify.go:220] Checking for updates...
	I0307 10:10:31.213320    5412 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	I0307 10:10:31.235287    5412 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0307 10:10:31.256158    5412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 10:10:31.277277    5412 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	I0307 10:10:31.298319    5412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 10:10:31.319623    5412 config.go:182] Loaded profile config "functional-333000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:10:31.320333    5412 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:10:31.320400    5412 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:10:31.327914    5412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50431
	I0307 10:10:31.328252    5412 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:10:31.328686    5412 main.go:141] libmachine: Using API Version  1
	I0307 10:10:31.328697    5412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:10:31.328902    5412 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:10:31.328993    5412 main.go:141] libmachine: (functional-333000) Calling .DriverName
	I0307 10:10:31.329117    5412 driver.go:365] Setting default libvirt URI to qemu:///system
	I0307 10:10:31.329376    5412 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:10:31.329395    5412 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:10:31.335994    5412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50433
	I0307 10:10:31.336321    5412 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:10:31.336667    5412 main.go:141] libmachine: Using API Version  1
	I0307 10:10:31.336683    5412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:10:31.336894    5412 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:10:31.336997    5412 main.go:141] libmachine: (functional-333000) Calling .DriverName
	I0307 10:10:31.364281    5412 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0307 10:10:31.406140    5412 start.go:296] selected driver: hyperkit
	I0307 10:10:31.406164    5412 start.go:857] validating driver "hyperkit" against &{Name:functional-333000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:functional-333000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0307 10:10:31.406359    5412 start.go:868] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 10:10:31.431171    5412 out.go:177] 
	W0307 10:10:31.452257    5412 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0307 10:10:31.473000    5412 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.45s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.44s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 status
functional_test.go:855: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-333000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-333000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-8fp4s" [cd0a8bc1-8178-466b-bd6e-20e86ef4f6d7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-8fp4s" [cd0a8bc1-8178-466b-bd6e-20e86ef4f6d7] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008756633s
functional_test.go:1647: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.64.4:30564
functional_test.go:1673: http://192.168.64.4:30564: success! body:

Hostname: hello-node-connect-5cf7cc858f-8fp4s

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.64.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.64.4:30564
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.55s)
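
Condensed, the connectivity check is deployment -> NodePort service -> URL lookup -> HTTP GET; the curl wrapper is mine, the rest is taken from the log:

	kubectl --context functional-333000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
	kubectl --context functional-333000 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-darwin-amd64 -p functional-333000 service hello-node-connect --url   # e.g. http://192.168.64.4:30564
	curl "$(out/minikube-darwin-amd64 -p functional-333000 service hello-node-connect --url)"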

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.25s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.25s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.76s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [52e74a4a-3aac-498c-b8ff-4b652d5944d2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006758776s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-333000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-333000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-333000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-333000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bc1cfa35-e845-4c38-8617-5a35a13934d7] Pending
helpers_test.go:344: "sp-pod" [bc1cfa35-e845-4c38-8617-5a35a13934d7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bc1cfa35-e845-4c38-8617-5a35a13934d7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.008953644s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-333000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-333000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-333000 delete -f testdata/storage-provisioner/pod.yaml: (1.087853041s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-333000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a471a05d-b55f-459b-a37e-9e04979b314d] Pending
helpers_test.go:344: "sp-pod" [a471a05d-b55f-459b-a37e-9e04979b314d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a471a05d-b55f-459b-a37e-9e04979b314d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005777502s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-333000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.76s)
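Every "waiting ... for pods matching" step above follows the same pattern: list pods by label selector and poll until one reports phase Running. A rough client-go sketch of that loop (the kubeconfig path and selector mirror this run's values; this is not the suite's actual helper):

// waitpods.go - a sketch of the label-selector poll used throughout these
// tests: list pods matching a selector and wait until one is Running.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/15985-3430/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "test=storage-provisioner"})
		if err != nil {
			return false, err // give up on API errors
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil // at least one matching pod is up
			}
		}
		return false, nil // keep polling
	})
	if err != nil {
		log.Fatalf("pods never became Running: %v", err)
	}
	fmt.Println("healthy")
}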

TestFunctional/parallel/SSHCmd (0.29s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.29s)

TestFunctional/parallel/CpCmd (0.57s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh -n functional-333000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 cp functional-333000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd2519176857/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh -n functional-333000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.57s)

TestFunctional/parallel/MySQL (21.77s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-333000 replace --force -f testdata/mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-4gt7d" [0aae1431-ccbb-4ec5-aab1-d8112d2d8e6d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0307 10:09:38.769936    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
helpers_test.go:344: "mysql-888f84dd9-4gt7d" [0aae1431-ccbb-4ec5-aab1-d8112d2d8e6d] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.020500144s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-333000 exec mysql-888f84dd9-4gt7d -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-333000 exec mysql-888f84dd9-4gt7d -- mysql -ppassword -e "show databases;": exit status 1 (179.454106ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-333000 exec mysql-888f84dd9-4gt7d -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-333000 exec mysql-888f84dd9-4gt7d -- mysql -ppassword -e "show databases;": exit status 1 (145.818352ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-333000 exec mysql-888f84dd9-4gt7d -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.77s)
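The two non-zero exits above are expected noise: the pod reports Running as soon as the container starts, but mysqld needs a few more seconds to create its socket, hence ERROR 2002 until the third attempt succeeds. A hedged Go sketch of the same retry loop (pod name and context are from this run; this is not the test's own helper):

// mysqlretry.go - sketch of the retry seen above: the pod is Running before
// mysqld has created its socket, so the query is retried until the server
// actually accepts connections.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"--context", "functional-333000", "exec", "mysql-888f84dd9-4gt7d",
		"--", "mysql", "-ppassword", "-e", "show databases;",
	}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			log.Printf("connected on attempt %d:\n%s", attempt, out)
			return
		}
		// ERROR 2002 (socket not ready yet) lands here; back off and retry.
		log.Printf("attempt %d failed: %v", attempt, err)
		time.Sleep(3 * time.Second)
	}
	log.Fatal("mysql never became reachable")
}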

TestFunctional/parallel/FileSync (0.15s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/3903/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "sudo cat /etc/test/nested/copy/3903/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.15s)

TestFunctional/parallel/CertSync (1.01s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/3903.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "sudo cat /etc/ssl/certs/3903.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/3903.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "sudo cat /usr/share/ca-certificates/3903.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/39032.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "sudo cat /etc/ssl/certs/39032.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/39032.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "sudo cat /usr/share/ca-certificates/39032.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.01s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-333000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.11s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-333000 ssh "sudo systemctl is-active crio": exit status 1 (111.5622ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.11s)
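The non-zero exit here is the desired outcome: `systemctl is-active` exits with status 3 for an inactive unit, and `inactive` on stdout confirms CRI-O is disabled while Docker is the active runtime. A small Go sketch of that exit-code interpretation (it would have to run inside the VM, e.g. via `minikube ssh`, and is not the test's own code):

// crio_inactive.go - sketch of the check above: a non-zero exit from
// `systemctl is-active crio` is the *expected* result on a Docker-runtime VM.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("systemctl", "is-active", "crio").Output()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		log.Fatalf("crio unexpectedly active: %q", state)
	case errors.As(err, &exitErr):
		// systemctl prints the unit state and exits 3 when inactive.
		fmt.Printf("crio state %q, exit code %d (inactive as expected)\n",
			state, exitErr.ExitCode())
	default:
		log.Fatalf("running systemctl: %v", err)
	}
}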

TestFunctional/parallel/License (0.73s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.73s)

TestFunctional/parallel/Version/short (0.13s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

TestFunctional/parallel/Version/components (0.35s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.35s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image ls --format short
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-333000 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-333000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-333000
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.14s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image ls --format table
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-333000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-apiserver              | v1.26.2           | 63d3239c3c159 | 134MB  |
| registry.k8s.io/kube-scheduler              | v1.26.2           | db8f409d9a5d7 | 56.3MB |
| registry.k8s.io/kube-proxy                  | v1.26.2           | 6f64e7135a6ec | 65.6MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-333000 | ffd4cfbbe753e | 32.9MB |
| docker.io/localhost/my-image                | functional-333000 | ad01c318b66d3 | 1.24MB |
| registry.k8s.io/kube-controller-manager     | v1.26.2           | 240e201d5b0d8 | 123MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-333000 | b7bc5d246068d | 30B    |
| docker.io/library/nginx                     | alpine            | 2bc7edbc3cf2f | 40.7MB |
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 904b8cb13b932 | 142MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
|---------------------------------------------|-------------------|---------------|--------|
E0307 10:10:40.210882    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
2023/03/07 10:10:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.15s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image ls --format json
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-333000 image ls --format json:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"db8f409d9a5d7c775876eb5e4e0c69089eff801fefbd8a356621a7b0f640f58c","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.2"],"size":"56300000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-333000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"63d3239c3c159b1db368f8cf0d597bef7bd4c82e15cd1b99a93fc7b50f255901","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.2"],"size":"134000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"904b8cb13b932e23230836850610fa45dce9eb0650d5618c2b1487c2a4f577b8","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"6f64e7135a6ec1adfb0c12e1864b0e8392facac43717a2c6911550740ab3992d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.2"],"size":"65599999"},{"id":"2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ad01c318b66d38248e8a684429068b5702066b6a99f45562bc46f11145fe4b70","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-333000"],"size":"1240000"},{"id":"b7bc5d246068d0f218002293722de836304e9c95e8ef2c0bd5eb9505e8e859b2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-333000"],"size":"30"},{"id":"240e201d5b0d8c6ae66764165080c22834e3a9fed050cf5780211d973644ac1e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.2"],"size":"123000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)
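The JSON payload is an array of image records with the four fields visible above. A sketch of decoding it in Go (the struct is inferred from this output, not minikube's own type; images.json is a hypothetical file holding the redirected CLI output):

// imagelist.go - decode the `image ls --format json` output shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	data, err := os.ReadFile("images.json") // e.g. redirected CLI output
	if err != nil {
		log.Fatal(err)
	}
	var images []listedImage
	if err := json.Unmarshal(data, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%-70v %s bytes\n", img.RepoTags, img.Size)
	}
}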

TestFunctional/parallel/ImageCommands/ImageListYaml (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image ls --format yaml
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-333000 image ls --format yaml:
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-333000
size: "32900000"
- id: b7bc5d246068d0f218002293722de836304e9c95e8ef2c0bd5eb9505e8e859b2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-333000
size: "30"
- id: 240e201d5b0d8c6ae66764165080c22834e3a9fed050cf5780211d973644ac1e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.2
size: "123000000"
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 904b8cb13b932e23230836850610fa45dce9eb0650d5618c2b1487c2a4f577b8
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 6f64e7135a6ec1adfb0c12e1864b0e8392facac43717a2c6911550740ab3992d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.2
size: "65599999"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 63d3239c3c159b1db368f8cf0d597bef7bd4c82e15cd1b99a93fc7b50f255901
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.2
size: "134000000"
- id: db8f409d9a5d7c775876eb5e4e0c69089eff801fefbd8a356621a7b0f640f58c
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.2
size: "56300000"
- id: 2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.14s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-333000 ssh pgrep buildkitd: exit status 1 (110.650547ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image build -t localhost/my-image:functional-333000 testdata/build
functional_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p functional-333000 image build -t localhost/my-image:functional-333000 testdata/build: (3.591498485s)
functional_test.go:318: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-333000 image build -t localhost/my-image:functional-333000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 849be55062d4
Removing intermediate container 849be55062d4
---> a9930078b253
Step 3/3 : ADD content.txt /
---> ad01c318b66d
Successfully built ad01c318b66d
Successfully tagged localhost/my-image:functional-333000
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.85s)

TestFunctional/parallel/ImageCommands/Setup (3.11s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.030691115s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-333000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.11s)

TestFunctional/parallel/DockerEnv/bash (0.74s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-333000 docker-env) && out/minikube-darwin-amd64 status -p functional-333000"
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-333000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.74s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image load --daemon gcr.io/google-containers/addon-resizer:functional-333000
functional_test.go:353: (dbg) Done: out/minikube-darwin-amd64 -p functional-333000 image load --daemon gcr.io/google-containers/addon-resizer:functional-333000: (2.781728034s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.94s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image load --daemon gcr.io/google-containers/addon-resizer:functional-333000
functional_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p functional-333000 image load --daemon gcr.io/google-containers/addon-resizer:functional-333000: (1.946861766s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.11s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.293841643s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-333000
functional_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image load --daemon gcr.io/google-containers/addon-resizer:functional-333000
functional_test.go:243: (dbg) Done: out/minikube-darwin-amd64 -p functional-333000 image load --daemon gcr.io/google-containers/addon-resizer:functional-333000: (3.05622341s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.58s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image save gcr.io/google-containers/addon-resizer:functional-333000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:378: (dbg) Done: out/minikube-darwin-amd64 -p functional-333000 image save gcr.io/google-containers/addon-resizer:functional-333000 /Users/jenkins/workspace/addon-resizer-save.tar: (1.119278547s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.12s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image rm gcr.io/google-containers/addon-resizer:functional-333000
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:407: (dbg) Done: out/minikube-darwin-amd64 -p functional-333000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.271337194s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.42s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-333000
functional_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 image save --daemon gcr.io/google-containers/addon-resizer:functional-333000
functional_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p functional-333000 image save --daemon gcr.io/google-containers/addon-resizer:functional-333000: (1.927404308s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-333000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-333000 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-333000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-w7t6c" [1762c5dd-3d88-4f67-a854-318d81ec2c97] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6fddd6858d-w7t6c" [1762c5dd-3d88-4f67-a854-318d81ec2c97] Running
E0307 10:09:59.251342    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.006136388s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.11s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-333000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-333000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [bb734eeb-7eb2-4bd0-abf5-c90e117caef3] Pending
helpers_test.go:344: "nginx-svc" [bb734eeb-7eb2-4bd0-abf5-c90e117caef3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [bb734eeb-7eb2-4bd0-abf5-c90e117caef3] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.006870454s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.13s)

TestFunctional/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 service list -o json
functional_test.go:1492: Took "355.875667ms" to run "out/minikube-darwin-amd64 -p functional-333000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.64.4:32353
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.23s)

TestFunctional/parallel/ServiceCmd/Format (0.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.23s)

TestFunctional/parallel/ServiceCmd/URL (0.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.64.4:32353
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.23s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-333000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.104.75.201 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:254: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:262: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)
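The dig above queries kube-dns at 10.96.0.10 directly from the macOS host, which only works while `minikube tunnel` is routing the cluster's service network. A minimal Go sketch of the same lookup through a custom resolver (server address and service name are the ones used above):

// clusterdns.go - resolve an in-cluster service name by asking kube-dns
// directly, reachable from the host only while the tunnel is up.
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Ignore the system resolver address; ask kube-dns instead.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(),
		"nginx-svc.default.svc.cluster.local.")
	if err != nil {
		log.Fatalf("cluster DNS lookup failed: %v", err)
	}
	fmt.Println("resolved to:", addrs)
}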

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:286: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:294: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:359: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-333000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1313: Took "202.72507ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1327: Took "67.469101ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1364: Took "191.703965ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1377: Took "68.436235ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

TestFunctional/parallel/MountCmd/any-port (7.75s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-333000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3760815407/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1678212621103077000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3760815407/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1678212621103077000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3760815407/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1678212621103077000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3760815407/001/test-1678212621103077000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-333000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (141.761333ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar  7 18:10 created-by-test
-rw-r--r-- 1 docker docker 24 Mar  7 18:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar  7 18:10 test-1678212621103077000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh cat /mount-9p/test-1678212621103077000
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-333000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [64de6cc4-b649-46e6-bb2a-a861f280825c] Pending
helpers_test.go:344: "busybox-mount" [64de6cc4-b649-46e6-bb2a-a861f280825c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [64de6cc4-b649-46e6-bb2a-a861f280825c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [64de6cc4-b649-46e6-bb2a-a861f280825c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.011168149s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-333000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-333000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3760815407/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.75s)
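Taken together this is a host-to-guest round-trip over the 9p mount: files written into the host temp directory appear under /mount-9p inside the VM, the busybox-mount pod's own write (created-by-pod) is visible back on the mount, and the first failed findmnt is just the mount not being ready yet, so the helper retries. A hedged Go sketch of the write-then-read half (hostDir is a stand-in for the temp directory minikube mounts; the binary path and profile name are this run's):

// mountcheck.go - write a file on the host side of the 9p mount, then
// confirm it is visible inside the VM via `minikube ssh`.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	hostDir := "/tmp/mount-9p-host" // hypothetical host side of the mount
	name := "created-by-test"
	if err := os.WriteFile(filepath.Join(hostDir, name), []byte("hello 9p\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	// /mount-9p is the guest-side mount point used by the test above.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-333000",
		"ssh", "cat /mount-9p/"+name).CombinedOutput()
	if err != nil {
		log.Fatalf("guest could not read the file: %v\n%s", err, out)
	}
	log.Printf("guest sees: %s", out)
}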

TestFunctional/parallel/MountCmd/specific-port (1.32s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-333000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2178324670/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-333000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (143.31303ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-333000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2178324670/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-333000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-333000 ssh "sudo umount -f /mount-9p": exit status 1 (111.692334ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-333000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-333000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2178324670/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.32s)
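Note: the forced "sudo umount -f /mount-9p" above fails with "not mounted" because the mount daemon has already been stopped, and the cleanup path tolerates that. A minimal sketch of telling that benign case apart from a real failure (our illustration, not minikube's test code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-333000",
			"ssh", "sudo umount -f /mount-9p")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && strings.Contains(string(out), "not mounted") {
			// Benign cleanup noise: the mount was already torn down.
			fmt.Printf("already unmounted (exit %d)\n", exitErr.ExitCode())
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Println("unmounted cleanly")
	}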

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-333000
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-333000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-333000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (2.07s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-160000
image_test.go:73: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-160000: (2.074454587s)
--- PASS: TestImageBuild/serial/NormalBuild (2.07s)

TestImageBuild/serial/BuildWithBuildArg (0.93s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-160000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.93s)

TestImageBuild/serial/BuildWithDockerIgnore (0.3s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-160000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.30s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.24s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-160000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.24s)

TestIngressAddonLegacy/StartLegacyK8sCluster (76.3s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-125000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
E0307 10:12:02.129778    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-125000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (1m16.301444279s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (76.30s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.76s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-125000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-125000 addons enable ingress --alsologtostderr -v=5: (13.758512804s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.76s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-125000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (38.75s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-125000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-125000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.03295259s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-125000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-125000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8bdb5cec-96bb-436e-ac75-b5a2ef2e90fc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8bdb5cec-96bb-436e-ac75-b5a2ef2e90fc] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.007328553s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-125000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-125000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-125000 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.64.6
addons_test.go:271: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-125000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-125000 addons disable ingress-dns --alsologtostderr -v=1: (7.650596121s)
addons_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-125000 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-125000 addons disable ingress --alsologtostderr -v=1: (7.199683438s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.75s)
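Note: the probe above curls 127.0.0.1 inside the VM with a Host header so that the Ingress rule for nginx.example.com matches. A minimal sketch of the same probe done from Go against the node IP printed by "minikube ip" in this run (an adaptation for illustration, not the test's code; substitute your own cluster's address):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://192.168.64.6/", nil)
		if err != nil {
			panic(err)
		}
		// Setting Host makes the Ingress rule for nginx.example.com match
		// even though the request is addressed to the bare node IP.
		req.Host = "nginx.example.com"
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status)
	}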

TestJSONOutput/start/Command (56.97s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-267000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0307 10:14:18.280541    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:14:36.726270    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:14:36.731616    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:14:36.741838    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:14:36.763837    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:14:36.804066    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:14:36.884962    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:14:37.045429    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:14:37.365938    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:14:38.006137    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:14:39.286351    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:14:41.846453    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:14:45.968660    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:14:46.968579    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-267000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (56.973791529s)
--- PASS: TestJSONOutput/start/Command (56.97s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.45s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-267000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.45s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.45s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-267000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.15s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-267000 --output=json --user=testUser
E0307 10:14:57.210795    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-267000 --output=json --user=testUser: (8.149976798s)
--- PASS: TestJSONOutput/stop/Command (8.15s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-990000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-990000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (346.938935ms)

-- stdout --
	{"specversion":"1.0","id":"277bb079-8d16-4a20-9ba7-b1189776065a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-990000] minikube v1.29.0 on Darwin 13.2.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4395271-da4e-4c66-af6d-3bf2266f38ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15985"}}
	{"specversion":"1.0","id":"f44d79e3-e1e6-4bc6-a473-159d522683b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig"}}
	{"specversion":"1.0","id":"03d03c2d-1001-42c0-ae7c-e17ad67632f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"57679090-69f4-4daa-b29a-43835874050d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6149cb85-6509-4e8a-9ea0-fc8b5d3ffc1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube"}}
	{"specversion":"1.0","id":"d4a43bc2-6fad-44d3-b9f7-ce467a130202","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"379111a6-dfd4-4f5d-ba81-5d6d41c4d156","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-990000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-990000
--- PASS: TestErrorJSONOutput (0.76s)
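Note: each stdout line above is a CloudEvents-style JSON object emitted by --output=json. A minimal decoder sketch for one such line, using the error event from this run (the struct is ours, not a minikube API type; field names mirror the log):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event mirrors the fields visible in the lines above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"379111a6-dfd4-4f5d-ba81-5d6d41c4d156","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"])
	}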

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (93.37s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-325000 --driver=hyperkit 
E0307 10:15:17.691520    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-325000 --driver=hyperkit : (42.382827981s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-327000 --driver=hyperkit 
E0307 10:15:58.650900    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-327000 --driver=hyperkit : (39.57034475s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-325000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-327000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-327000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-327000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-327000: (5.2750159s)
helpers_test.go:175: Cleaning up "first-325000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-325000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-325000: (5.273733108s)
--- PASS: TestMinikubeProfile (93.37s)

TestMountStart/serial/StartWithMountFirst (15.5s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-995000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-995000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (14.495346601s)
--- PASS: TestMountStart/serial/StartWithMountFirst (15.50s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-995000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-995000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (15.44s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-021000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-021000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (14.443525182s)
--- PASS: TestMountStart/serial/StartWithMountSecond (15.44s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-021000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-021000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (2.37s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-995000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-995000 --alsologtostderr -v=5: (2.365506508s)
--- PASS: TestMountStart/serial/DeleteFirst (2.37s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-021000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-021000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (2.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-021000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-021000: (2.201626638s)
--- PASS: TestMountStart/serial/Stop (2.20s)

TestMountStart/serial/RestartStopped (40.75s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-021000
E0307 10:17:20.569828    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-021000: (39.751413328s)
--- PASS: TestMountStart/serial/RestartStopped (40.75s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-021000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-021000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (95.49s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-260000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0307 10:18:08.335127    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:18:08.340574    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:18:08.352630    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:18:08.372945    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:18:08.413553    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:18:08.495647    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:18:08.655803    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:18:08.976490    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:18:09.616658    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:18:10.897230    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:18:13.457980    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:18:18.578454    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:18:28.819389    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:18:49.299844    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:19:18.409083    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-260000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m35.268956668s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.49s)

TestMultiNode/serial/DeployApp2Nodes (5.35s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- rollout status deployment/busybox
E0307 10:19:30.262365    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-260000 -- rollout status deployment/busybox: (3.769530413s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:503: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- exec busybox-6b86dd6d48-dmrds -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- exec busybox-6b86dd6d48-tw9p8 -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- exec busybox-6b86dd6d48-dmrds -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- exec busybox-6b86dd6d48-tw9p8 -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- exec busybox-6b86dd6d48-dmrds -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- exec busybox-6b86dd6d48-tw9p8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.35s)

TestMultiNode/serial/PingHostFrom2Pods (0.86s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- exec busybox-6b86dd6d48-dmrds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- exec busybox-6b86dd6d48-dmrds -- sh -c "ping -c 1 192.168.64.1"
multinode_test.go:547: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- exec busybox-6b86dd6d48-tw9p8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-260000 -- exec busybox-6b86dd6d48-tw9p8 -- sh -c "ping -c 1 192.168.64.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)
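Note: in this busybox image's nslookup output the resolved address sits on line 5, third space-separated field, which is what the awk 'NR==5' | cut -d' ' -f3 pipeline above extracts before the pod pings the host. A minimal sketch of the same two steps (our illustration, using kubectl directly with the profile's context rather than the test's wrapper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// run executes a shell snippet inside a pod and returns trimmed output.
	func run(pod, script string) (string, error) {
		out, err := exec.Command("kubectl", "--context", "multinode-260000",
			"exec", pod, "--", "sh", "-c", script).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		pod := "busybox-6b86dd6d48-dmrds" // pod name from the run above
		ip, err := run(pod, "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
		if err != nil {
			panic(err)
		}
		if _, err := run(pod, fmt.Sprintf("ping -c 1 %s", ip)); err != nil {
			panic(err)
		}
		fmt.Println("host", ip, "reachable from", pod)
	}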

TestMultiNode/serial/AddNode (407s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-260000 -v 3 --alsologtostderr
E0307 10:19:36.857211    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:20:04.542643    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:20:52.183753    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:23:08.335704    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:23:36.025690    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:24:18.410602    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:24:36.857229    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:25:41.461049    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-260000 -v 3 --alsologtostderr: (6m46.707101754s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (407.00s)

TestMultiNode/serial/ProfileList (0.2s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (4.92s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 cp testdata/cp-test.txt multinode-260000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 cp multinode-260000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile946595065/001/cp-test_multinode-260000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 cp multinode-260000:/home/docker/cp-test.txt multinode-260000-m02:/home/docker/cp-test_multinode-260000_multinode-260000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000-m02 "sudo cat /home/docker/cp-test_multinode-260000_multinode-260000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 cp multinode-260000:/home/docker/cp-test.txt multinode-260000-m03:/home/docker/cp-test_multinode-260000_multinode-260000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000-m03 "sudo cat /home/docker/cp-test_multinode-260000_multinode-260000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 cp testdata/cp-test.txt multinode-260000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 cp multinode-260000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile946595065/001/cp-test_multinode-260000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 cp multinode-260000-m02:/home/docker/cp-test.txt multinode-260000:/home/docker/cp-test_multinode-260000-m02_multinode-260000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000 "sudo cat /home/docker/cp-test_multinode-260000-m02_multinode-260000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 cp multinode-260000-m02:/home/docker/cp-test.txt multinode-260000-m03:/home/docker/cp-test_multinode-260000-m02_multinode-260000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000-m03 "sudo cat /home/docker/cp-test_multinode-260000-m02_multinode-260000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 cp testdata/cp-test.txt multinode-260000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 cp multinode-260000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile946595065/001/cp-test_multinode-260000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 cp multinode-260000-m03:/home/docker/cp-test.txt multinode-260000:/home/docker/cp-test_multinode-260000-m03_multinode-260000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000 "sudo cat /home/docker/cp-test_multinode-260000-m03_multinode-260000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 cp multinode-260000-m03:/home/docker/cp-test.txt multinode-260000-m02:/home/docker/cp-test_multinode-260000-m03_multinode-260000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 ssh -n multinode-260000-m02 "sudo cat /home/docker/cp-test_multinode-260000-m03_multinode-260000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (4.92s)
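Note: the CopyFile block above exercises a copy matrix: the host seeds a file onto each node, copies it back out, and fans it out from each node to every other node, verifying contents with "sudo cat" after each hop. A minimal sketch of the two guest-side legs (the loop structure is our illustration; the "cp" subcommand and file-naming scheme are the ones in the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		return exec.Command("out/minikube-darwin-amd64", args...).Run()
	}

	func main() {
		profile := "multinode-260000"
		nodes := []string{"multinode-260000", "multinode-260000-m02", "multinode-260000-m03"}
		for _, src := range nodes {
			// host -> node: seed the file on this node.
			if err := run("-p", profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt"); err != nil {
				panic(err)
			}
			// node -> every other node, named cp-test_<src>_<dst>.txt as in the log.
			for _, dst := range nodes {
				if dst == src {
					continue
				}
				target := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
				if err := run("-p", profile, "cp", src+":/home/docker/cp-test.txt", target); err != nil {
					panic(err)
				}
			}
		}
	}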

TestMultiNode/serial/StopNode (2.71s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-260000 node stop m03: (2.241334934s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-260000 status: exit status 7 (230.804117ms)

-- stdout --
	multinode-260000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-260000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-260000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-260000 status --alsologtostderr: exit status 7 (232.208793ms)

-- stdout --
	multinode-260000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-260000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-260000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0307 10:26:29.491461    6946 out.go:296] Setting OutFile to fd 1 ...
	I0307 10:26:29.491655    6946 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:26:29.491660    6946 out.go:309] Setting ErrFile to fd 2...
	I0307 10:26:29.491664    6946 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:26:29.491767    6946 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15985-3430/.minikube/bin
	I0307 10:26:29.491962    6946 out.go:303] Setting JSON to false
	I0307 10:26:29.491985    6946 mustload.go:65] Loading cluster: multinode-260000
	I0307 10:26:29.492036    6946 notify.go:220] Checking for updates...
	I0307 10:26:29.492281    6946 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:26:29.492296    6946 status.go:255] checking status of multinode-260000 ...
	I0307 10:26:29.492690    6946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:26:29.492740    6946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:26:29.499685    6946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51562
	I0307 10:26:29.500017    6946 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:26:29.500443    6946 main.go:141] libmachine: Using API Version  1
	I0307 10:26:29.500481    6946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:26:29.500673    6946 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:26:29.500782    6946 main.go:141] libmachine: (multinode-260000) Calling .GetState
	I0307 10:26:29.500866    6946 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:26:29.500933    6946 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid from json: 6235
	I0307 10:26:29.502015    6946 status.go:330] multinode-260000 host status = "Running" (err=<nil>)
	I0307 10:26:29.502030    6946 host.go:66] Checking if "multinode-260000" exists ...
	I0307 10:26:29.502265    6946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:26:29.502291    6946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:26:29.509022    6946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51564
	I0307 10:26:29.509343    6946 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:26:29.509710    6946 main.go:141] libmachine: Using API Version  1
	I0307 10:26:29.509729    6946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:26:29.509924    6946 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:26:29.510019    6946 main.go:141] libmachine: (multinode-260000) Calling .GetIP
	I0307 10:26:29.510105    6946 host.go:66] Checking if "multinode-260000" exists ...
	I0307 10:26:29.510376    6946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:26:29.510401    6946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:26:29.516926    6946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51566
	I0307 10:26:29.517251    6946 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:26:29.517578    6946 main.go:141] libmachine: Using API Version  1
	I0307 10:26:29.517594    6946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:26:29.517786    6946 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:26:29.517869    6946 main.go:141] libmachine: (multinode-260000) Calling .DriverName
	I0307 10:26:29.518005    6946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 10:26:29.518031    6946 main.go:141] libmachine: (multinode-260000) Calling .GetSSHHostname
	I0307 10:26:29.518100    6946 main.go:141] libmachine: (multinode-260000) Calling .GetSSHPort
	I0307 10:26:29.518171    6946 main.go:141] libmachine: (multinode-260000) Calling .GetSSHKeyPath
	I0307 10:26:29.518245    6946 main.go:141] libmachine: (multinode-260000) Calling .GetSSHUsername
	I0307 10:26:29.518322    6946 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000/id_rsa Username:docker}
	I0307 10:26:29.562361    6946 ssh_runner.go:195] Run: systemctl --version
	I0307 10:26:29.566997    6946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:26:29.576713    6946 kubeconfig.go:92] found "multinode-260000" server: "https://192.168.64.12:8443"
	I0307 10:26:29.576735    6946 api_server.go:165] Checking apiserver status ...
	I0307 10:26:29.576774    6946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 10:26:29.584988    6946 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1842/cgroup
	I0307 10:26:29.590731    6946 api_server.go:181] apiserver freezer: "11:freezer:/kubepods/burstable/pod76402f877907c95a3936143f580968be/3e9b5dec9e21dbe1d153709409d5c13ef5adf9a1ee14bdc46a36a17d483c4323"
	I0307 10:26:29.590776    6946 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod76402f877907c95a3936143f580968be/3e9b5dec9e21dbe1d153709409d5c13ef5adf9a1ee14bdc46a36a17d483c4323/freezer.state
	I0307 10:26:29.596815    6946 api_server.go:203] freezer state: "THAWED"
	I0307 10:26:29.596832    6946 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0307 10:26:29.600548    6946 api_server.go:278] https://192.168.64.12:8443/healthz returned 200:
	ok
	I0307 10:26:29.600557    6946 status.go:421] multinode-260000 apiserver status = Running (err=<nil>)
	I0307 10:26:29.600568    6946 status.go:257] multinode-260000 status: &{Name:multinode-260000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:26:29.600579    6946 status.go:255] checking status of multinode-260000-m02 ...
	I0307 10:26:29.601461    6946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:26:29.601482    6946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:26:29.608649    6946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51570
	I0307 10:26:29.608997    6946 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:26:29.609339    6946 main.go:141] libmachine: Using API Version  1
	I0307 10:26:29.609354    6946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:26:29.609539    6946 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:26:29.609636    6946 main.go:141] libmachine: (multinode-260000-m02) Calling .GetState
	I0307 10:26:29.609721    6946 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:26:29.609798    6946 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid from json: 6295
	I0307 10:26:29.610940    6946 status.go:330] multinode-260000-m02 host status = "Running" (err=<nil>)
	I0307 10:26:29.610949    6946 host.go:66] Checking if "multinode-260000-m02" exists ...
	I0307 10:26:29.611198    6946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:26:29.611219    6946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:26:29.618019    6946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51572
	I0307 10:26:29.618354    6946 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:26:29.618660    6946 main.go:141] libmachine: Using API Version  1
	I0307 10:26:29.618670    6946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:26:29.618860    6946 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:26:29.618950    6946 main.go:141] libmachine: (multinode-260000-m02) Calling .GetIP
	I0307 10:26:29.619031    6946 host.go:66] Checking if "multinode-260000-m02" exists ...
	I0307 10:26:29.619287    6946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:26:29.619314    6946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:26:29.625988    6946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51574
	I0307 10:26:29.626353    6946 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:26:29.626691    6946 main.go:141] libmachine: Using API Version  1
	I0307 10:26:29.626705    6946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:26:29.626895    6946 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:26:29.626988    6946 main.go:141] libmachine: (multinode-260000-m02) Calling .DriverName
	I0307 10:26:29.627097    6946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 10:26:29.627109    6946 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHHostname
	I0307 10:26:29.627184    6946 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHPort
	I0307 10:26:29.627257    6946 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHKeyPath
	I0307 10:26:29.627339    6946 main.go:141] libmachine: (multinode-260000-m02) Calling .GetSSHUsername
	I0307 10:26:29.627408    6946 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15985-3430/.minikube/machines/multinode-260000-m02/id_rsa Username:docker}
	I0307 10:26:29.660169    6946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 10:26:29.669092    6946 status.go:257] multinode-260000-m02 status: &{Name:multinode-260000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:26:29.669107    6946 status.go:255] checking status of multinode-260000-m03 ...
	I0307 10:26:29.669369    6946 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:26:29.669391    6946 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:26:29.676169    6946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51577
	I0307 10:26:29.676512    6946 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:26:29.676869    6946 main.go:141] libmachine: Using API Version  1
	I0307 10:26:29.676884    6946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:26:29.677096    6946 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:26:29.677202    6946 main.go:141] libmachine: (multinode-260000-m03) Calling .GetState
	I0307 10:26:29.677298    6946 main.go:141] libmachine: (multinode-260000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:26:29.677372    6946 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid from json: 6711
	I0307 10:26:29.678429    6946 main.go:141] libmachine: (multinode-260000-m03) DBG | hyperkit pid 6711 missing from process table
	I0307 10:26:29.678475    6946 status.go:330] multinode-260000-m03 host status = "Stopped" (err=<nil>)
	I0307 10:26:29.678483    6946 status.go:343] host is not running, skipping remaining checks
	I0307 10:26:29.678496    6946 status.go:257] multinode-260000-m03 status: &{Name:multinode-260000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.71s)
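The stderr trace above also documents how "minikube status" decides the apiserver is healthy: find the kube-apiserver process with pgrep, confirm its freezer cgroup reports THAWED (the container is not paused), then poll https://192.168.64.12:8443/healthz for a 200. A minimal Go sketch of that probe sequence follows; the freezer path is a hypothetical placeholder for the pod and container IDs the real code resolves at runtime.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Hypothetical freezer path; the real one is built from the pod and
	// container IDs found via pgrep and /proc/<pid>/cgroup, as in the log.
	freezer := "/sys/fs/cgroup/freezer/kubepods/burstable/<pod>/<container>/freezer.state"
	b, err := os.ReadFile(freezer)
	if err != nil || strings.TrimSpace(string(b)) != "THAWED" {
		fmt.Println("apiserver container is frozen or unreadable; not healthy")
		return
	}
	// The guest serves the apiserver with a self-signed cert, hence the
	// insecure client in this sketch.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.64.12:8443/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status) // the log above expects 200 and body "ok"
}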

TestMultiNode/serial/StartAfterStop (29.53s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-260000 node start m03 --alsologtostderr: (29.159263865s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.53s)

TestMultiNode/serial/DeleteNode (8.8s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-260000 node delete m03: (8.456597924s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (8.80s)
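The kubectl go-template above walks every node's status.conditions and prints the status of each "Ready" condition, so a fully healthy cluster yields one True line per node. Below is a self-contained sketch of that same template evaluated with Go's text/template; the sample NodeList JSON is illustrative, trimmed to the fields the template reads.

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Illustrative two-node NodeList, reduced to what the template touches.
	const nodeList = `{"items":[
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`
	var v interface{}
	if err := json.Unmarshal([]byte(nodeList), &v); err != nil {
		panic(err)
	}
	// Same template string the test passes to kubectl -o go-template.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}` +
			`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	if err := tmpl.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
	// Prints " True" once per node; the test asserts exactly that.
}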

TestMultiNode/serial/StopMultiNode (16.46s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-260000 stop: (16.333773811s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-260000 status: exit status 7 (63.968681ms)
-- stdout --
	multinode-260000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-260000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-260000 status --alsologtostderr: exit status 7 (64.194689ms)
-- stdout --
	multinode-260000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-260000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0307 10:30:43.440042    7274 out.go:296] Setting OutFile to fd 1 ...
	I0307 10:30:43.440223    7274 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:30:43.440228    7274 out.go:309] Setting ErrFile to fd 2...
	I0307 10:30:43.440232    7274 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0307 10:30:43.440330    7274 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15985-3430/.minikube/bin
	I0307 10:30:43.440508    7274 out.go:303] Setting JSON to false
	I0307 10:30:43.440531    7274 mustload.go:65] Loading cluster: multinode-260000
	I0307 10:30:43.440581    7274 notify.go:220] Checking for updates...
	I0307 10:30:43.440814    7274 config.go:182] Loaded profile config "multinode-260000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0307 10:30:43.440828    7274 status.go:255] checking status of multinode-260000 ...
	I0307 10:30:43.441177    7274 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:30:43.441223    7274 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:30:43.448393    7274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51777
	I0307 10:30:43.449198    7274 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:30:43.449638    7274 main.go:141] libmachine: Using API Version  1
	I0307 10:30:43.449655    7274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:30:43.449845    7274 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:30:43.449933    7274 main.go:141] libmachine: (multinode-260000) Calling .GetState
	I0307 10:30:43.450017    7274 main.go:141] libmachine: (multinode-260000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:30:43.450083    7274 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid from json: 7033
	I0307 10:30:43.450893    7274 main.go:141] libmachine: (multinode-260000) DBG | hyperkit pid 7033 missing from process table
	I0307 10:30:43.450924    7274 status.go:330] multinode-260000 host status = "Stopped" (err=<nil>)
	I0307 10:30:43.450931    7274 status.go:343] host is not running, skipping remaining checks
	I0307 10:30:43.450935    7274 status.go:257] multinode-260000 status: &{Name:multinode-260000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 10:30:43.450953    7274 status.go:255] checking status of multinode-260000-m02 ...
	I0307 10:30:43.451201    7274 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0307 10:30:43.451225    7274 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0307 10:30:43.457914    7274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51779
	I0307 10:30:43.458205    7274 main.go:141] libmachine: () Calling .GetVersion
	I0307 10:30:43.458532    7274 main.go:141] libmachine: Using API Version  1
	I0307 10:30:43.458551    7274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0307 10:30:43.458740    7274 main.go:141] libmachine: () Calling .GetMachineName
	I0307 10:30:43.458836    7274 main.go:141] libmachine: (multinode-260000-m02) Calling .GetState
	I0307 10:30:43.458922    7274 main.go:141] libmachine: (multinode-260000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0307 10:30:43.458979    7274 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid from json: 7098
	I0307 10:30:43.459769    7274 main.go:141] libmachine: (multinode-260000-m02) DBG | hyperkit pid 7098 missing from process table
	I0307 10:30:43.459817    7274 status.go:330] multinode-260000-m02 host status = "Stopped" (err=<nil>)
	I0307 10:30:43.459829    7274 status.go:343] host is not running, skipping remaining checks
	I0307 10:30:43.459835    7274 status.go:257] multinode-260000-m02 status: &{Name:multinode-260000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.46s)

TestMultiNode/serial/RestartMultiNode (76.83s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-260000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0307 10:30:59.908011    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-260000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m16.502886132s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-260000 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (76.83s)

TestMultiNode/serial/ValidateNameConflict (46.95s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-260000
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-260000-m02 --driver=hyperkit 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-260000-m02 --driver=hyperkit : exit status 14 (403.871934ms)
-- stdout --
	* [multinode-260000-m02] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-260000-m02' is duplicated with machine name 'multinode-260000-m02' in profile 'multinode-260000'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-260000-m03 --driver=hyperkit 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-260000-m03 --driver=hyperkit : (40.93715471s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-260000
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-260000: exit status 80 (275.74673ms)
-- stdout --
	* Adding node m03 to cluster multinode-260000
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-260000-m03 already exists in multinode-260000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-260000-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-260000-m03: (5.292052316s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.95s)

TestPreload (174.4s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-805000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0307 10:33:08.340411    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-805000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m12.619568443s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-805000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-805000 -- docker pull gcr.io/k8s-minikube/busybox: (2.274235041s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-805000
E0307 10:34:18.413555    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-805000: (8.208570988s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-805000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E0307 10:34:31.388885    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 10:34:36.861371    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-805000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m25.888354299s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-805000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-805000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-805000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-805000: (5.265912856s)
--- PASS: TestPreload (174.40s)

TestScheduledStopUnix (107.75s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-170000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-170000 --memory=2048 --driver=hyperkit : (36.398554998s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-170000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-170000 -n scheduled-stop-170000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-170000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-170000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-170000 -n scheduled-stop-170000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-170000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-170000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-170000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-170000: exit status 7 (58.702522ms)
-- stdout --
	scheduled-stop-170000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-170000 -n scheduled-stop-170000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-170000 -n scheduled-stop-170000: exit status 7 (54.757634ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-170000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-170000
--- PASS: TestScheduledStopUnix (107.75s)
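The --schedule / --cancel-scheduled exchange above behaves like a cancellable timer: a stop is armed for a future instant, and a later cancel discards it if it has not fired. A minimal Go sketch of that pattern; this is an assumption about the mechanism, not minikube's actual daemon code.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Arm a stop for later; in the real flow this lives in a daemonized
	// minikube process, not in the caller.
	stop := time.AfterFunc(15*time.Second, func() { fmt.Println("stopping cluster") })

	// --cancel-scheduled: Stop reports whether the timer was still pending.
	if stop.Stop() {
		fmt.Println("scheduled stop cancelled before it fired")
	}
}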

TestSkaffold (83.96s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3323420886 version
skaffold_test.go:63: skaffold version: v2.2.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-962000 --memory=2600 --driver=hyperkit 
E0307 10:38:08.341808    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-962000 --memory=2600 --driver=hyperkit : (41.458893841s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3323420886 run --minikube-profile skaffold-962000 --kube-context skaffold-962000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3323420886 run --minikube-profile skaffold-962000 --kube-context skaffold-962000 --status-check=true --port-forward=false --interactive=false: (19.697082055s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7b49bbffb9-rq4nr" [f9451eec-ac0c-41f0-8c2b-9c75ec63b0a2] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.01309007s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5869d944dc-9nf8d" [da1a8102-01d4-417f-90a5-10e2b855d79f] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.007652103s
helpers_test.go:175: Cleaning up "skaffold-962000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-962000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-962000: (5.268948733s)
--- PASS: TestSkaffold (83.96s)

TestKubernetesUpgrade (151.66s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 
E0307 10:43:51.724292    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:43:51.730185    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:43:51.741198    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:43:51.761245    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:43:51.801403    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:43:51.881546    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:43:52.041962    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:43:52.363371    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:43:53.003514    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:43:54.283619    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:43:56.844186    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:44:01.965312    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:44:12.205549    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:44:18.416605    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:44:32.687010    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:44:36.862247    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
version_upgrade_test.go:230: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m12.049276003s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-340000
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-340000: (2.219872553s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-340000 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-340000 status --format={{.Host}}: exit status 7 (55.653643ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:251: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=hyperkit : (38.536304357s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-340000 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (404.13537ms)
-- stdout --
	* [kubernetes-upgrade-340000] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-340000
	    minikube start -p kubernetes-upgrade-340000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3400002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.2, by running:
	    
	    minikube start -p kubernetes-upgrade-340000 --kubernetes-version=v1.26.2
	    
** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:283: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-340000 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=hyperkit : (33.076839918s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-340000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-340000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-340000: (5.264467379s)
--- PASS: TestKubernetesUpgrade (151.66s)
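The exit-status-106 block above is the downgrade gate: a profile already on Kubernetes v1.26.2 refuses to start at v1.16.0 and instead suggests deleting the cluster, creating a second one, or keeping the current version. A sketch of the version comparison such a gate needs, using the golang.org/x/mod/semver package; this illustrates the check, it is not minikube's implementation.

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	existing, requested := "v1.26.2", "v1.16.0"
	// A requested version older than the deployed one is refused outright.
	if semver.Compare(requested, existing) < 0 {
		fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n",
			existing, requested)
		return
	}
	fmt.Println("same version or upgrade: allowed")
}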

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.11s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15985
- KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3975343035/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3975343035/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3975343035/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3975343035/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.11s)
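The two sudo commands printed above exist because the hyperkit driver binary must be owned by root and carry the setuid bit before it can manage VMs. A small Go check for that file state; the install path below is hypothetical.

package main

import (
	"fmt"
	"os"
)

func main() {
	path := "/usr/local/bin/docker-machine-driver-hyperkit" // hypothetical install path
	info, err := os.Stat(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	if info.Mode()&os.ModeSetuid == 0 {
		fmt.Println("driver lacks setuid; run the chown root:wheel and chmod u+s shown above")
		return
	}
	fmt.Println("driver has the setuid bit")
}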

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.79s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15985
- KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1628843433/001
E0307 10:39:18.415446    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1628843433/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1628843433/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1628843433/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.79s)

TestStoppedBinaryUpgrade/Setup (2.55s)
=== RUN   TestStoppedBinaryUpgrade/Setup
E0307 10:45:13.649178    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
--- PASS: TestStoppedBinaryUpgrade/Setup (2.55s)

TestStoppedBinaryUpgrade/Upgrade (161.76s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.2447808524.exe start -p stopped-upgrade-813000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:191: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.2447808524.exe start -p stopped-upgrade-813000 --memory=2200 --vm-driver=hyperkit : (1m30.58371107s)
version_upgrade_test.go:200: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.2447808524.exe -p stopped-upgrade-813000 stop
version_upgrade_test.go:200: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.2447808524.exe -p stopped-upgrade-813000 stop: (8.077801059s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-813000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-813000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m3.096761746s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (161.76s)

TestPause/serial/Start (53.72s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-017000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
E0307 10:46:35.570056    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-017000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (53.721480162s)
--- PASS: TestPause/serial/Start (53.72s)

TestPause/serial/SecondStartNoReconfiguration (52.56s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-017000 --alsologtostderr -v=1 --driver=hyperkit 
E0307 10:47:39.912909    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-017000 --alsologtostderr -v=1 --driver=hyperkit : (52.545042924s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (52.56s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.49s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-813000
version_upgrade_test.go:214: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-813000: (2.491227507s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.49s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.56s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-207000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-207000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (555.986717ms)
-- stdout --
	* [NoKubernetes-207000] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15985-3430/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15985-3430/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.56s)
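The MK_USAGE failure above is a flag mutual-exclusion check: --kubernetes-version has no meaning when --no-kubernetes is set. A sketch of that validation with Go's flag package; the flag names mirror the CLI and exit code 14 matches the run above, but minikube's real option handling lives elsewhere.

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// The two options contradict each other, so reject the combination early.
	if *noK8s && *version != "" {
		fmt.Fprintln(os.Stderr,
			"X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the exit status seen in the run above
	}
}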

TestNoKubernetes/serial/StartWithK8s (41.15s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-207000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-207000 --driver=hyperkit : (41.004092668s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-207000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.15s)

TestPause/serial/Pause (0.48s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-017000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.48s)

TestPause/serial/VerifyStatus (0.14s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-017000 --output=json --layout=cluster
E0307 10:48:08.379828    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-017000 --output=json --layout=cluster: exit status 2 (139.206744ms)
-- stdout --
	{"Name":"pause-017000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-017000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.14s)
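The --layout=cluster JSON above encodes component health with HTTP-flavored status codes: 200 OK, 405 Stopped, 418 Paused. A sketch that decodes a trimmed copy of the payload shown, keeping only the fields a caller would inspect.

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed from the -- stdout -- block above.
	raw := `{"Name":"pause-017000","StatusCode":418,"StatusName":"Paused",
	 "Nodes":[{"Name":"pause-017000","StatusCode":200,"StatusName":"OK",
	  "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	                "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.Name, "is", st.StatusName)
	for _, n := range st.Nodes {
		for _, c := range n.Components {
			fmt.Printf("  %s: %s (%d)\n", c.Name, c.StatusName, c.StatusCode)
		}
	}
}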

TestPause/serial/Unpause (0.51s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-017000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.51s)

TestPause/serial/PauseAgain (0.55s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-017000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.55s)

TestPause/serial/DeletePaused (5.26s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-017000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-017000 --alsologtostderr -v=5: (5.260109848s)
--- PASS: TestPause/serial/DeletePaused (5.26s)

TestPause/serial/VerifyDeletedResources (0.17s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)

TestNetworkPlugins/group/auto/Start (59.63s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (59.629285064s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.63s)

TestNoKubernetes/serial/StartWithStopK8s (7.72s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-207000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-207000 --no-kubernetes --driver=hyperkit : (5.13246193s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-207000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-207000 status -o json: exit status 2 (133.655454ms)
-- stdout --
	{"Name":"NoKubernetes-207000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-207000
E0307 10:48:51.773412    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-207000: (2.452896105s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.72s)

TestNoKubernetes/serial/Start (18.44s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-207000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-207000 --no-kubernetes --driver=hyperkit : (18.443463261s)
--- PASS: TestNoKubernetes/serial/Start (18.44s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-207000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-207000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (115.40423ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)
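The "ssh: Process exited with status 3" above is systemd's answer: systemctl is-active exits 3 for an inactive unit, so the non-zero exit is exactly what "kubelet is not running" looks like. A local sketch of the same assertion; the exit-code meaning is systemd's convention, not something stated in the log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// The test runs this over ssh; locally the exit code is the same signal.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet unit is active")
	case errors.As(err, &ee) && ee.ExitCode() == 3:
		fmt.Println("kubelet unit is inactive, which is what the test expects")
	default:
		fmt.Println("could not determine unit state:", err)
	}
}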

TestNoKubernetes/serial/ProfileList (0.5s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.50s)

TestNoKubernetes/serial/Stop (2.2s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-207000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-207000: (2.198313523s)
--- PASS: TestNoKubernetes/serial/Stop (2.20s)

TestNetworkPlugins/group/auto/KubeletFlags (0.14s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-713000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.14s)

TestNetworkPlugins/group/auto/NetCatPod (13.4s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-713000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-ktjjn" [d0e45056-d414-448d-af2c-3bdd30ee4fab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0307 10:49:18.465627    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:49:19.459404    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-ktjjn" [d0e45056-d414-448d-af2c-3bdd30ee4fab] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.006412275s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.40s)

TestNoKubernetes/serial/StartNoArgs (15.76s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-207000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-207000 --driver=hyperkit : (15.763584846s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (15.76s)

TestNetworkPlugins/group/auto/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-713000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
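DNS, Localhost, and HairPin above are all probes executed inside the netcat pod. HairPin is the subtle one: the pod dials its own Service name ("netcat"), so the connection leaves the pod and is NATed straight back, which only succeeds when the bridge/CNI has hairpin mode enabled. A sketch of the same probe driven from Go (the command line is copied from the log; kubectl on PATH is assumed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// nc -z is a connect-only probe; -w 5 bounds the wait at five seconds.
		// Dialing the Service name "netcat" from the netcat pod itself is the
		// hairpin case; swapping in "localhost" gives the Localhost check.
		cmd := exec.Command("kubectl", "--context", "auto-713000",
			"exec", "deployment/netcat", "--",
			"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
		if err := cmd.Run(); err != nil {
			fmt.Println("hairpin probe failed:", err)
			return
		}
		fmt.Println("hairpin probe succeeded")
	}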

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.11s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-207000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-207000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (112.512231ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.11s)
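This test passes because the command fails: on a profile started without Kubernetes, `systemctl is-active` exits non-zero for the kubelet unit (status 3 means inactive, which minikube ssh surfaces here as exit status 1). A minimal Go sketch of asserting that inverted expectation:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Inverted assertion: a non-zero exit means the kubelet unit is NOT
		// active, which is the expected state for a NoKubernetes profile.
		cmd := exec.Command("out/minikube-darwin-amd64", "ssh", "-p", "NoKubernetes-207000",
			"sudo systemctl is-active --quiet service kubelet")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("kubelet inactive as expected; exit code:", ee.ExitCode())
			return
		}
		if err == nil {
			fmt.Println("unexpected: kubelet unit reports active")
		}
	}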

                                                
                                    
TestNetworkPlugins/group/calico/Start (72.35s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
E0307 10:49:36.913545    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m12.347187158s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.35s)

TestNetworkPlugins/group/custom-flannel/Start (61.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (1m1.51900225s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.52s)
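Unlike the other Start variants, --cni here is given a file path: minikube applies testdata/kube-flannel.yaml as a custom CNI manifest rather than selecting a bundled plugin. A sketch of issuing the same start from Go (flags copied from the log; error handling added):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --cni accepts either a bundled plugin name (calico, flannel, bridge,
		// kindnet, ...) or, as here, a path to a custom CNI manifest to apply.
		cmd := exec.Command("out/minikube-darwin-amd64", "start",
			"-p", "custom-flannel-713000", "--memory=3072",
			"--wait=true", "--wait-timeout=15m",
			"--cni=testdata/kube-flannel.yaml", "--driver=hyperkit")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("start failed: %v\n%s", err, out)
		}
	}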

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-d72vg" [1c3a6618-8de5-497d-9058-b72b55960ac5] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.012950727s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.01s)
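ControllerPod gates the calico connectivity tests on the CNI's own DaemonSet pod (label k8s-app=calico-node) becoming healthy. Outside the harness, a one-shot kubectl wait expresses the same condition; a minimal Go wrapper, assuming kubectl on PATH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// One-shot equivalent of the harness's 10m label-selector wait: block
		// until the calico-node pod is Ready or the timeout expires.
		cmd := exec.Command("kubectl", "--context", "calico-713000",
			"wait", "--for=condition=Ready", "pod",
			"-l", "k8s-app=calico-node", "-n", "kube-system", "--timeout=600s")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("calico-node not ready: %v\n%s", err, out)
		}
	}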

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-713000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.14s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (16.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-713000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-g7lbn" [f69f37c8-752b-41e6-a2f7-2df9b478d978] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-g7lbn" [f69f37c8-752b-41e6-a2f7-2df9b478d978] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 16.008638973s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (16.21s)

TestNetworkPlugins/group/calico/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-713000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.14s)

TestNetworkPlugins/group/calico/NetCatPod (14.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-713000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-vxpzf" [7e828789-48a1-48bf-8ee5-aed37a903860] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-vxpzf" [7e828789-48a1-48bf-8ee5-aed37a903860] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.005186692s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-713000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-713000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/false/Start (60.25s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (1m0.248071893s)
--- PASS: TestNetworkPlugins/group/false/Start (60.25s)

TestNetworkPlugins/group/kindnet/Start (77.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m17.096461047s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.10s)

TestNetworkPlugins/group/false/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-713000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.15s)

TestNetworkPlugins/group/false/NetCatPod (15.19s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-713000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-hd99x" [c0f0deca-4a49-4939-bacb-40c9957dd9fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-hd99x" [c0f0deca-4a49-4939-bacb-40c9957dd9fa] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 15.006085471s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (15.19s)

TestNetworkPlugins/group/false/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-713000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qmfpd" [a6b13f83-62cb-4446-ab48-32912655041b] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014424011s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-713000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

TestNetworkPlugins/group/kindnet/NetCatPod (16.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-713000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-lc4bl" [78932426-86ef-4ebe-8f57-d94fd2f9d901] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-lc4bl" [78932426-86ef-4ebe-8f57-d94fd2f9d901] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 16.007080659s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (16.22s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-713000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (56.94s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (56.943830198s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (56.94s)

TestNetworkPlugins/group/bridge/Start (96.77s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
E0307 10:53:51.777552    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:54:15.277308    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 10:54:15.283228    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 10:54:15.294132    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 10:54:15.315373    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 10:54:15.355767    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 10:54:15.437191    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 10:54:15.598670    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 10:54:15.918889    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (1m36.769709551s)
--- PASS: TestNetworkPlugins/group/bridge/Start (96.77s)
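The E-lines interleaved with this start (cert_rotation.go:168) appear to be harness-side noise rather than test output: client-go's certificate-rotation watcher in the long-lived test process (pid 3903) still references client certs of profiles that earlier tests deleted, so each reload attempt fails with "no such file or directory". The underlying error is easy to reproduce (path copied verbatim from the log):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Opening a client cert whose profile was already deleted yields the
		// same error text embedded in the E-lines above.
		_, err := os.Open("/Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt")
		fmt.Println(err) // open ...: no such file or directory
	}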

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-713000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.14s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-713000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-crlx9" [a206a237-d147-4791-9e1d-b0031792f8e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0307 10:54:16.559036    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 10:54:17.841144    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 10:54:18.469130    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:54:20.402140    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-crlx9" [a206a237-d147-4791-9e1d-b0031792f8e4] Running
E0307 10:54:25.522327    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.00934006s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-713000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/Start (54.88s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E0307 10:54:56.245237    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-713000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (54.884203244s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (54.88s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-713000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

TestNetworkPlugins/group/bridge/NetCatPod (16.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-713000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-lzrrf" [c8e0b4cf-26cc-4067-bdb4-5cf62b510fda] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-lzrrf" [c8e0b4cf-26cc-4067-bdb4-5cf62b510fda] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 16.007024836s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (16.20s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-713000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestStartStop/group/old-k8s-version/serial/FirstStart (138.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-848000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-848000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m18.615801118s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (138.62s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-713000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)
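KubeletFlags inspects the live kubelet command line over SSH via `pgrep -a`; for the kubenet profile the point is presumably to confirm that --network-plugin=kubenet (a kubelet-level plugin, not a CNI add-on) reached the kubelet invocation. A sketch of that check in Go (the searched-for flag is an assumption, taken from the start command above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// pgrep -a prints PID plus full command line, so the kubelet
		// invocation can be searched for the flag the profile was started with.
		out, err := exec.Command("out/minikube-darwin-amd64", "ssh",
			"-p", "kubenet-713000", "pgrep -a kubelet").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		if strings.Contains(string(out), "--network-plugin=kubenet") {
			fmt.Println("kubelet is running with the kubenet network plugin")
		}
	}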

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (15.22s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-713000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-g4xr4" [3b2e0db1-87c6-4f74-a896-36fc1915812c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0307 10:55:45.804040    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 10:55:45.809367    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 10:55:45.820402    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 10:55:45.841351    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 10:55:45.883400    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 10:55:45.963609    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 10:55:46.123906    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 10:55:46.445099    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 10:55:47.085341    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 10:55:47.184961    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:55:47.190096    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:55:47.201812    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:55:47.222847    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:55:47.264224    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:55:47.345434    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:55:47.506322    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:55:47.826803    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:55:48.365760    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 10:55:48.466923    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:55:49.748788    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:55:50.926927    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-g4xr4" [3b2e0db1-87c6-4f74-a896-36fc1915812c] Running
E0307 10:55:52.309482    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:55:56.047882    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 15.006283733s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (15.22s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-713000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-713000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (108.56s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-612000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.26.2
E0307 10:56:26.770223    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 10:56:28.151771    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:56:59.128037    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 10:57:07.732817    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 10:57:09.114237    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:57:21.463198    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:57:21.469121    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:57:21.480044    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:57:21.502135    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:57:21.544260    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:57:21.624937    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:57:21.785119    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:57:22.106634    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:57:22.746815    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:57:24.027172    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:57:26.587750    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:57:31.709496    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:57:40.052894    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 10:57:40.058480    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 10:57:40.068757    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 10:57:40.091051    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 10:57:40.132505    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 10:57:40.214621    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 10:57:40.376722    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 10:57:40.698144    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 10:57:41.338555    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 10:57:41.949811    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:57:42.620037    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 10:57:45.180583    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 10:57:50.301023    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-612000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.26.2: (1m48.558235711s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (108.56s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-848000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [facd3bd6-cd10-4daa-ae8a-59049cc28d0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0307 10:58:00.542383    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [facd3bd6-cd10-4daa-ae8a-59049cc28d0a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.022232761s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-848000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.31s)
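DeployApp closes by exec'ing `ulimit -n` in the busybox pod, which both proves kubectl exec works against the v1.16.0 cluster and reads the container's open-file limit. The same probe from Go (pod and context names from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same probe the test ends with: exec into the busybox pod and read
		// the container's open-file-descriptor limit.
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-848000",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			fmt.Println("exec failed:", err)
			return
		}
		fmt.Printf("open file limit in pod: %s", out)
	}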

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-612000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [863cce2e-09f4-4824-804e-8f570683e180] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0307 10:58:02.430423    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [863cce2e-09f4-4824-804e-8f570683e180] Running
E0307 10:58:08.397616    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.013876194s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-612000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.25s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-848000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-848000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.64s)

TestStartStop/group/old-k8s-version/serial/Stop (8.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-848000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-848000 --alsologtostderr -v=3: (8.219501373s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.22s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.65s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-612000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-612000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.65s)

TestStartStop/group/no-preload/serial/Stop (8.21s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-612000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-612000 --alsologtostderr -v=3: (8.212177859s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.21s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-848000 -n old-k8s-version-848000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-848000 -n old-k8s-version-848000: exit status 7 (56.395967ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-848000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)
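The non-zero exit here is informative rather than fatal: with the host stopped, `minikube status` prints Stopped and exits 7, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon on the stopped profile. In Go that means branching on the exit code instead of treating every error as failure (a sketch; the code-7 meaning is inferred from this log):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-848000").Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			// Exit status 7 with "Stopped" on stdout is the tolerated state
			// here; addons can still be enabled against a stopped profile.
			fmt.Printf("host state %q (status 7), continuing\n", string(out))
			return
		}
		if err != nil {
			fmt.Println("status failed:", err)
		}
	}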

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (473.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-848000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-848000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (7m53.57106286s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-848000 -n old-k8s-version-848000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (473.72s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-612000 -n no-preload-612000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-612000 -n no-preload-612000: exit status 7 (54.593384ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-612000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/no-preload/serial/SecondStart (331.38s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-612000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.26.2
E0307 10:58:21.023077    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 10:58:29.655977    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 10:58:31.035207    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 10:58:43.392087    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 10:58:51.780964    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 10:59:01.524698    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:59:01.984291    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 10:59:15.280851    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 10:59:16.231575    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 10:59:16.237298    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 10:59:16.247633    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 10:59:16.269784    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 10:59:16.311785    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 10:59:16.393233    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 10:59:16.554912    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 10:59:16.877082    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 10:59:17.518731    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 10:59:18.471540    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 10:59:18.799482    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 10:59:21.359715    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 10:59:26.480573    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 10:59:36.722215    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 10:59:36.919213    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 10:59:42.969726    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 10:59:57.203047    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 11:00:05.314136    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 11:00:07.226233    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:07.231785    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:07.243838    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:07.264807    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:07.305219    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:07.386986    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:07.547656    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:07.868813    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:08.509247    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:09.790611    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:12.350803    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:14.826721    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 11:00:17.473061    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:23.905376    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 11:00:27.713669    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:38.164137    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 11:00:41.464640    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:00:41.470477    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:00:41.480864    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:00:41.501027    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:00:41.541287    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:00:41.622907    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:00:41.783444    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:00:42.103977    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:00:42.745497    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:00:44.026871    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:00:45.806453    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 11:00:46.588266    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:00:47.186005    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 11:00:48.194079    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:00:51.709921    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:01:01.950336    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:01:13.499781    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 11:01:14.878303    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 11:01:22.432159    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:01:29.155049    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:02:00.085839    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 11:02:03.394797    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:02:21.466258    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 11:02:40.056101    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 11:02:49.156594    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
E0307 11:02:51.076246    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
E0307 11:03:07.747696    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
E0307 11:03:08.401398    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 11:03:25.316575    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-612000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.26.2: (5m31.2304071s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-612000 -n no-preload-612000
E0307 11:03:51.782277    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (331.38s)
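
Note: throughout this report the harness gates on single fields of minikube's status output rendered through a Go template (for example status --format={{.Host}} above) and reads the process exit code alongside the printed value. Below is a minimal sketch of such a probe as a standalone Go program; the helper is hypothetical and is not the harness's own code, but the binary path, profile name, and template field are the ones from this log.

// statusfield.go - probe one field of `minikube status` via a Go template.
// Hypothetical helper, not the harness's code; binary path and profile name
// are taken from the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func statusField(profile, field string) (string, int, error) {
	cmd := exec.Command("out/minikube-darwin-amd64",
		"status", "--format", "{{."+field+"}}", "-p", profile)
	out, err := cmd.Output()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		// A non-zero exit still carries usable stdout, e.g. "Stopped".
		code, err = ee.ExitCode(), nil
	}
	return strings.TrimSpace(string(out)), code, err
}

func main() {
	val, code, err := statusField("no-preload-612000", "Host")
	if err != nil {
		panic(err)
	}
	// The harness treats some non-zero codes as expected ("may be ok"),
	// e.g. exit status 7 for a stopped host, as seen elsewhere in this log.
	fmt.Printf("Host=%q exit=%d\n", val, code)
}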

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-lmppb" [22efd0fd-4ef3-46ee-8ef6-6938bd30fa3b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-lmppb" [22efd0fd-4ef3-46ee-8ef6-6938bd30fa3b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.012286051s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)
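
Note: UserAppExistsAfterStop passes once every pod matching the label k8s-app=kubernetes-dashboard reports Running inside the 9m0s window. The following sketch shows that polling pattern using kubectl's label selector and JSONPath output; the helper is illustrative and not the code in helpers_test.go, though the context, namespace, selector, and timeout are taken from the log.

// waitpods.go - poll until all label-matched pods report phase Running.
// Illustrative sketch; the real wait logic lives in helpers_test.go.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// allRunning returns true when at least one pod matches the selector and
// every matching pod reports phase Running.
func allRunning(ctx, ns, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", ctx,
		"get", "pods", "-n", ns, "-l", selector,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	deadline := time.Now().Add(9 * time.Minute) // the window used in the log
	for time.Now().Before(deadline) {
		ok, err := allRunning("no-preload-612000",
			"kubernetes-dashboard", "k8s-app=kubernetes-dashboard")
		if err == nil && ok {
			fmt.Println("pods healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pods")
}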

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-lmppb" [22efd0fd-4ef3-46ee-8ef6-6938bd30fa3b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004652361s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-612000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-612000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/no-preload/serial/Pause (1.7s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-612000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-612000 -n no-preload-612000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-612000 -n no-preload-612000: exit status 2 (137.264474ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-612000 -n no-preload-612000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-612000 -n no-preload-612000: exit status 2 (138.662827ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-612000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-612000 -n no-preload-612000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-612000 -n no-preload-612000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.70s)
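
Note: the Pause step above is a fixed six-command cycle: pause, read {{.APIServer}} and {{.Kubelet}} through status templates (both exit with status 2 while paused, which the test tolerates as "may be ok"), then unpause and read both fields again. A table-driven sketch of the same cycle follows; the wrapper is hypothetical, while the binary path, profile, and flags are copied from the log.

// pausecycle.go - drive the pause/verify/unpause/verify sequence.
// Sketch of the sequence shown in the log, not the test's own code.
package main

import (
	"fmt"
	"os/exec"
)

// run executes the minikube binary from this log and returns its exit code.
func run(args ...string) int {
	if err := exec.Command("out/minikube-darwin-amd64", args...).Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		panic(err) // binary missing or not executable, not a status result
	}
	return 0
}

func main() {
	profile := "no-preload-612000"
	steps := []struct {
		name string
		args []string
	}{
		{"pause", []string{"pause", "-p", profile}},
		// exit status 2 on the next two probes is expected while paused
		{"apiserver", []string{"status", "--format", "{{.APIServer}}", "-p", profile}},
		{"kubelet", []string{"status", "--format", "{{.Kubelet}}", "-p", profile}},
		{"unpause", []string{"unpause", "-p", profile}},
		{"apiserver", []string{"status", "--format", "{{.APIServer}}", "-p", profile}},
		{"kubelet", []string{"status", "--format", "{{.Kubelet}}", "-p", profile}},
	}
	for _, s := range steps {
		fmt.Printf("%s -> exit %d\n", s.name, run(s.args...))
	}
}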

TestStartStop/group/embed-certs/serial/FirstStart (63.51s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-009000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.26.2
E0307 11:04:18.473953    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 11:04:19.971543    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 11:04:36.921896    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 11:04:43.928132    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 11:05:07.229068    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-009000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.26.2: (1m3.508831445s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (63.51s)

TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-009000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [296be3ff-a487-4eaf-8d2a-5df554abfbef] Pending
helpers_test.go:344: "busybox" [296be3ff-a487-4eaf-8d2a-5df554abfbef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [296be3ff-a487-4eaf-8d2a-5df554abfbef] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.013626189s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-009000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.25s)
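
Note: DeployApp finishes by asserting the container's open-file limit with `ulimit -n` inside the busybox pod. A sketch of that final check follows; it is illustrative only, parsing the shell's output as an integer, with the context and pod name taken from the log.

// ulimitcheck.go - read the open-file limit inside the busybox pod.
// Illustrative sketch of the closing check in DeployApp, not the harness's code.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "embed-certs-009000",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		panic(err)
	}
	n, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		panic(err)
	}
	fmt.Printf("open-file limit in pod: %d\n", n)
}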

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.62s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-009000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-009000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.62s)

TestStartStop/group/embed-certs/serial/Stop (8.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-009000 --alsologtostderr -v=3
E0307 11:05:34.920007    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-009000 --alsologtostderr -v=3: (8.237447686s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.24s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-009000 -n embed-certs-009000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-009000 -n embed-certs-009000: exit status 7 (94.330299ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-009000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (297.36s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-009000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.26.2
E0307 11:05:41.466631    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
E0307 11:05:45.810570    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 11:05:47.189086    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 11:06:09.158425    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-009000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.26.2: (4m57.208791519s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-009000 -n embed-certs-009000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (297.36s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2nwbd" [3e4f5805-8677-4d77-ba0b-125d3e1d61ed] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011096855s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2nwbd" [3e4f5805-8677-4d77-ba0b-125d3e1d61ed] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005712307s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-848000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-848000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/old-k8s-version/serial/Pause (1.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-848000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-848000 -n old-k8s-version-848000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-848000 -n old-k8s-version-848000: exit status 2 (143.585432ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-848000 -n old-k8s-version-848000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-848000 -n old-k8s-version-848000: exit status 2 (144.995838ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-848000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-848000 -n old-k8s-version-848000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-848000 -n old-k8s-version-848000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.71s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-555000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.26.2
E0307 11:07:21.469035    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-555000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.26.2: (55.741311872s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.74s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-555000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9b570ef5-94be-4063-9544-cc7a9e3c4bfd] Pending
helpers_test.go:344: "busybox" [9b570ef5-94be-4063-9544-cc7a9e3c4bfd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9b570ef5-94be-4063-9544-cc7a9e3c4bfd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.012925602s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-555000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-555000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-555000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.64s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-555000 --alsologtostderr -v=3
E0307 11:07:40.060623    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kindnet-713000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-555000 --alsologtostderr -v=3: (8.238294257s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.24s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000: exit status 7 (56.375251ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-555000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-555000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.26.2
E0307 11:07:51.452527    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 11:07:59.088021    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:07:59.093457    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:07:59.104289    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:07:59.126207    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:07:59.168199    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:07:59.250373    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:07:59.451084    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:07:59.771256    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:08:00.411668    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:08:01.693564    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:08:02.247104    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:02.252198    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:02.262353    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:02.282405    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:02.322971    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:02.403939    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:02.564379    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:02.885688    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:03.526483    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:04.254152    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:08:04.807139    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:07.367821    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:08.404069    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/ingress-addon-legacy-125000/client.crt: no such file or directory
E0307 11:08:09.382220    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:08:12.488079    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:19.622951    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:08:22.729897    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:40.104751    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:08:43.210290    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:08:51.785358    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/skaffold-962000/client.crt: no such file or directory
E0307 11:09:15.286923    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 11:09:16.236717    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/enable-default-cni-713000/client.crt: no such file or directory
E0307 11:09:18.477653    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/addons-251000/client.crt: no such file or directory
E0307 11:09:21.066194    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:09:24.171290    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
E0307 11:09:36.923807    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0307 11:10:07.231812    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/bridge-713000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-555000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.26.2: (4m56.930093055s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.09s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-rwjpz" [a14f05ec-3a4c-449e-91ba-b86ce074d013] Running
E0307 11:10:38.336194    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/auto-713000/client.crt: no such file or directory
E0307 11:10:41.469659    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/kubenet-713000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010432597s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-rwjpz" [a14f05ec-3a4c-449e-91ba-b86ce074d013] Running
E0307 11:10:42.988493    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/old-k8s-version-848000/client.crt: no such file or directory
E0307 11:10:45.813025    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 11:10:46.092977    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/no-preload-612000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006967815s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-009000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-009000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/embed-certs/serial/Pause (1.75s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-009000 --alsologtostderr -v=1
E0307 11:10:47.192318    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-009000 -n embed-certs-009000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-009000 -n embed-certs-009000: exit status 2 (141.318828ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-009000 -n embed-certs-009000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-009000 -n embed-certs-009000: exit status 2 (140.94259ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-009000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-009000 -n embed-certs-009000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-009000 -n embed-certs-009000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.75s)

TestStartStop/group/newest-cni/serial/FirstStart (53.39s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-408000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-408000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.26.2: (53.389438135s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (53.39s)
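
Note: the newest-cni FirstStart invocation above is the longest in this group: --wait is restricted to apiserver,system_pods,default_sa because no pods can schedule until the CNI is set up, ServerSideApply is switched on via --feature-gates, and a pod-network CIDR is pushed into kubeadm through --extra-config. A sketch that assembles the same argument list follows; every flag name and value is copied from the log, while the assembling program itself is hypothetical.

// newestcniargs.go - assemble the newest-cni start invocation from its parts.
// Flags and values are copied from the log; the builder itself is hypothetical.
package main

import (
	"fmt"
	"strings"
)

func main() {
	args := []string{
		"start", "-p", "newest-cni-408000",
		"--memory=2200",
		"--alsologtostderr",
		// only these components are waited on; CNI pods cannot schedule yet
		"--wait=apiserver,system_pods,default_sa",
		"--feature-gates", "ServerSideApply=true",
		"--network-plugin=cni",
		"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
		"--driver=hyperkit",
		"--kubernetes-version=v1.26.2",
	}
	fmt.Println("out/minikube-darwin-amd64 " + strings.Join(args, " "))
}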

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.71s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-408000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.71s)

TestStartStop/group/newest-cni/serial/Stop (8.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-408000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-408000 --alsologtostderr -v=3: (8.291757975s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-408000 -n newest-cni-408000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-408000 -n newest-cni-408000: exit status 7 (55.910667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-408000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/newest-cni/serial/SecondStart (38.7s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-408000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.26.2
E0307 11:12:08.866353    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/calico-713000/client.crt: no such file or directory
E0307 11:12:10.246960    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/custom-flannel-713000/client.crt: no such file or directory
E0307 11:12:21.471922    3903 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15985-3430/.minikube/profiles/false-713000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-408000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.26.2: (38.541315063s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-408000 -n newest-cni-408000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.70s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-408000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/newest-cni/serial/Pause (1.86s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-408000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-408000 -n newest-cni-408000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-408000 -n newest-cni-408000: exit status 2 (147.194957ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-408000 -n newest-cni-408000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-408000 -n newest-cni-408000: exit status 2 (145.134802ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-408000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-408000 -n newest-cni-408000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-408000 -n newest-cni-408000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.86s)
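A note on the sequence above: status --format takes a Go template over minikube's status fields, and while the cluster is paused the status command exits with code 2, which the test accepts ("may be ok"). A minimal by-hand replay using the same profile (a sketch, not the test's exact helper):

	out/minikube-darwin-amd64 pause -p newest-cni-408000
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-408000   # prints Paused, exit status 2
	out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-408000     # prints Stopped, exit status 2
	out/minikube-darwin-amd64 unpause -p newest-cni-408000
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-408000   # expected to succeed again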

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-wzmpv" [f34263e8-0381-4948-8cf0-d117e6bf1597] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011969053s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)
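The 9m wait above comes from a polling helper in helpers_test.go that watches pods matching the k8s-app=kubernetes-dashboard label. A rough stand-alone equivalent with kubectl wait (an assumption; the suite does not shell out to kubectl for this):

	kubectl --context default-k8s-diff-port-555000 -n kubernetes-dashboard \
	  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m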

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-wzmpv" [f34263e8-0381-4948-8cf0-d117e6bf1597] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005474116s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-555000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-555000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)
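VerifyKubernetesImages reads the CRI image list as JSON over ssh and reports anything outside the expected Kubernetes image set, such as the busybox image noted above. A sketch for inspecting the same list by hand (the jq filter is an assumption layered on crictl's JSON output):

	out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-555000 "sudo crictl images -o json" \
	  | jq -r '.images[].repoTags[]'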

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-555000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000: exit status 2 (149.886947ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000: exit status 2 (148.949317ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-555000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.86s)

Test skip (19/306)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.26.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.2/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.2/cached-images (0.00s)

TestDownloadOnly/v1.26.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.2/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:292: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.59s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-713000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-713000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-713000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-713000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-713000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-713000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-713000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-713000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-713000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-713000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-713000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: /etc/hosts:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: /etc/resolv.conf:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-713000

>>> host: crictl pods:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: crictl containers:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> k8s: describe netcat deployment:
error: context "cilium-713000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-713000" does not exist

>>> k8s: netcat logs:
error: context "cilium-713000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-713000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-713000" does not exist

>>> k8s: coredns logs:
error: context "cilium-713000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-713000" does not exist

>>> k8s: api server logs:
error: context "cilium-713000" does not exist

>>> host: /etc/cni:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: ip a s:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: ip r s:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: iptables-save:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: iptables table nat:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-713000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-713000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-713000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-713000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-713000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-713000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-713000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-713000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-713000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-713000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-713000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: kubelet daemon config:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> k8s: kubelet logs:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-713000

>>> host: docker daemon status:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: docker daemon config:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: docker system info:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: cri-docker daemon status:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: cri-docker daemon config:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: cri-dockerd version:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: containerd daemon status:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: containerd daemon config:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: containerd config dump:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: crio daemon status:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: crio daemon config:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: /etc/crio:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

>>> host: crio config:
* Profile "cilium-713000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-713000"

----------------------- debugLogs end: cilium-713000 [took: 5.188797113s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-713000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-713000
--- SKIP: TestNetworkPlugins/group/cilium (5.59s)
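Every debug probe above fails for the same benign reason: the test is skipped before a cluster is ever started, so no cilium-713000 kubeconfig context or running profile exists, which also explains the empty kubectl config dump. A quick confirmation using commands the log itself references:

	kubectl config get-contexts              # no cilium-713000 entry expected
	out/minikube-darwin-amd64 profile list   # stale profile entry removed by the cleanup above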

TestStartStop/group/disable-driver-mounts (0.42s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-766000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-766000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.42s)