Test Report: Hyperkit_macOS 16578

d4c33ff371b38c9e245a0eee82030d8958ba8577:2023-06-10:29644

Failed tests (12/316)

TestMultiNode/serial/FreshStart2Nodes (20.77s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-826000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-826000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : exit status 90 (20.633241432s)
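For local triage, the failing invocation can be replayed verbatim; the binary path, profile name, and flags below are copied from the log. The go test form is a sketch, assuming minikube's usual integration-test layout (the package path is not shown in this report):

    # Replay the exact command the test harness ran (all arguments from the log):
    out/minikube-darwin-amd64 start -p multinode-826000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit

    # Hypothetical: re-run only this subtest through the Go test harness
    # (assumes the integration tests live under ./test/integration):
    go test ./test/integration -run 'TestMultiNode/serial/FreshStart2Nodes' -v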

-- stdout --
	* [multinode-826000] minikube v1.30.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node multinode-826000 in cluster multinode-826000
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0610 09:38:27.333749    3473 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:38:27.333925    3473 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:38:27.333932    3473 out.go:309] Setting ErrFile to fd 2...
	I0610 09:38:27.333938    3473 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:38:27.334053    3473 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
	I0610 09:38:27.335469    3473 out.go:303] Setting JSON to false
	I0610 09:38:27.354476    3473 start.go:127] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2277,"bootTime":1686412830,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0610 09:38:27.354567    3473 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:38:27.375962    3473 out.go:177] * [multinode-826000] minikube v1.30.1 on Darwin 13.4
	I0610 09:38:27.434159    3473 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:38:27.434171    3473 notify.go:220] Checking for updates...
	I0610 09:38:27.457028    3473 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:38:27.480128    3473 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 09:38:27.501126    3473 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:38:27.522053    3473 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube
	I0610 09:38:27.543289    3473 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:38:27.564451    3473 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:38:27.592961    3473 out.go:177] * Using the hyperkit driver based on user configuration
	I0610 09:38:27.635123    3473 start.go:297] selected driver: hyperkit
	I0610 09:38:27.635156    3473 start.go:875] validating driver "hyperkit" against <nil>
	I0610 09:38:27.635178    3473 start.go:886] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:38:27.638596    3473 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:38:27.638708    3473 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/16578-1235/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 09:38:27.645437    3473 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.30.1
	I0610 09:38:27.648787    3473 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:38:27.648805    3473 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 09:38:27.648893    3473 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 09:38:27.649081    3473 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 09:38:27.649110    3473 cni.go:84] Creating CNI manager for ""
	I0610 09:38:27.649119    3473 cni.go:136] 0 nodes found, recommending kindnet
	I0610 09:38:27.649125    3473 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 09:38:27.649135    3473 start_flags.go:319] config:
	{Name:multinode-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:38:27.649267    3473 iso.go:125] acquiring lock: {Name:mkc028968ad126cece35ec994c5f11699b30bc34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:38:27.690866    3473 out.go:177] * Starting control plane node multinode-826000 in cluster multinode-826000
	I0610 09:38:27.712068    3473 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:38:27.712173    3473 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0610 09:38:27.712205    3473 cache.go:57] Caching tarball of preloaded images
	I0610 09:38:27.712372    3473 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 09:38:27.712389    3473 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 09:38:27.712852    3473 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/config.json ...
	I0610 09:38:27.712895    3473 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/config.json: {Name:mk96a955df354a5a4a4dd6f4c58a67dc01bf2b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:38:27.713503    3473 cache.go:195] Successfully downloaded all kic artifacts
	I0610 09:38:27.713554    3473 start.go:364] acquiring machines lock for multinode-826000: {Name:mk73e5861e2a32aaad6eda5ce405a92c74d96949 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 09:38:27.713650    3473 start.go:368] acquired machines lock for "multinode-826000" in 81.582µs
	I0610 09:38:27.713693    3473 start.go:93] Provisioning new machine with config: &{Name:multinode-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:38:27.713776    3473 start.go:125] createHost starting for "" (driver="hyperkit")
	I0610 09:38:27.756009    3473 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 09:38:27.756463    3473 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:38:27.756515    3473 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:38:27.764652    3473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50999
	I0610 09:38:27.765038    3473 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:38:27.765488    3473 main.go:141] libmachine: Using API Version  1
	I0610 09:38:27.765500    3473 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:38:27.765728    3473 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:38:27.765830    3473 main.go:141] libmachine: (multinode-826000) Calling .GetMachineName
	I0610 09:38:27.765916    3473 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:38:27.766019    3473 start.go:159] libmachine.API.Create for "multinode-826000" (driver="hyperkit")
	I0610 09:38:27.766040    3473 client.go:168] LocalClient.Create starting
	I0610 09:38:27.766090    3473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem
	I0610 09:38:27.766132    3473 main.go:141] libmachine: Decoding PEM data...
	I0610 09:38:27.766146    3473 main.go:141] libmachine: Parsing certificate...
	I0610 09:38:27.766190    3473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem
	I0610 09:38:27.766219    3473 main.go:141] libmachine: Decoding PEM data...
	I0610 09:38:27.766229    3473 main.go:141] libmachine: Parsing certificate...
	I0610 09:38:27.766244    3473 main.go:141] libmachine: Running pre-create checks...
	I0610 09:38:27.766253    3473 main.go:141] libmachine: (multinode-826000) Calling .PreCreateCheck
	I0610 09:38:27.766322    3473 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:38:27.766502    3473 main.go:141] libmachine: (multinode-826000) Calling .GetConfigRaw
	I0610 09:38:27.766943    3473 main.go:141] libmachine: Creating machine...
	I0610 09:38:27.766951    3473 main.go:141] libmachine: (multinode-826000) Calling .Create
	I0610 09:38:27.767017    3473 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:38:27.767133    3473 main.go:141] libmachine: (multinode-826000) DBG | I0610 09:38:27.767015    3481 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/16578-1235/.minikube
	I0610 09:38:27.767191    3473 main.go:141] libmachine: (multinode-826000) Downloading /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1235/.minikube/cache/iso/amd64/minikube-v1.30.1-1686096373-16019-amd64.iso...
	I0610 09:38:27.936787    3473 main.go:141] libmachine: (multinode-826000) DBG | I0610 09:38:27.936695    3481 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa...
	I0610 09:38:28.007219    3473 main.go:141] libmachine: (multinode-826000) DBG | I0610 09:38:28.007149    3481 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/multinode-826000.rawdisk...
	I0610 09:38:28.007259    3473 main.go:141] libmachine: (multinode-826000) DBG | Writing magic tar header
	I0610 09:38:28.007270    3473 main.go:141] libmachine: (multinode-826000) DBG | Writing SSH key tar header
	I0610 09:38:28.008073    3473 main.go:141] libmachine: (multinode-826000) DBG | I0610 09:38:28.007982    3481 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000 ...
	I0610 09:38:28.321332    3473 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:38:28.321354    3473 main.go:141] libmachine: (multinode-826000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/hyperkit.pid
	I0610 09:38:28.321369    3473 main.go:141] libmachine: (multinode-826000) DBG | Using UUID 39ebe0dc-07ad-11ee-b579-f01898ef957c
	I0610 09:38:28.441914    3473 main.go:141] libmachine: (multinode-826000) DBG | Generated MAC fa:20:3f:84:ae:92
	I0610 09:38:28.441938    3473 main.go:141] libmachine: (multinode-826000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-826000
	I0610 09:38:28.441979    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"39ebe0dc-07ad-11ee-b579-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00009f1d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/bzimage", Initrd:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 09:38:28.442013    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"39ebe0dc-07ad-11ee-b579-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00009f1d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/bzimage", Initrd:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 09:38:28.442087    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "39ebe0dc-07ad-11ee-b579-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/multinode-826000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/tty,log=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/bzimage,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-826000"}
	I0610 09:38:28.442128    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 39ebe0dc-07ad-11ee-b579-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/multinode-826000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/tty,log=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/console-ring -f kexec,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/bzimage,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-826000"
	I0610 09:38:28.442145    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 09:38:28.444699    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 DEBUG: hyperkit: Pid is 3484
	I0610 09:38:28.445051    3473 main.go:141] libmachine: (multinode-826000) DBG | Attempt 0
	I0610 09:38:28.445061    3473 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:38:28.445124    3473 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3484
	I0610 09:38:28.445914    3473 main.go:141] libmachine: (multinode-826000) DBG | Searching for fa:20:3f:84:ae:92 in /var/db/dhcpd_leases ...
	I0610 09:38:28.445971    3473 main.go:141] libmachine: (multinode-826000) DBG | Found 10 entries in /var/db/dhcpd_leases!
	I0610 09:38:28.445995    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:32:30:6d:e9:c8:b4 ID:1,32:30:6d:e9:c8:b4 Lease:0x6484a701}
	I0610 09:38:28.446004    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:a6:94:da:ab:ab:e2 ID:1,a6:94:da:ab:ab:e2 Lease:0x6484a6eb}
	I0610 09:38:28.446011    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:3a:96:c4:94:8e:b0 ID:1,3a:96:c4:94:8e:b0 Lease:0x6485f81d}
	I0610 09:38:28.446018    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:e6:27:b7:b3:13:83 ID:1,e6:27:b7:b3:13:83 Lease:0x6485f7f9}
	I0610 09:38:28.446027    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ea:f7:ed:fb:5e:ee ID:1,ea:f7:ed:fb:5e:ee Lease:0x6485f7ba}
	I0610 09:38:28.446034    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:c2:ab:cc:f4:2:8a ID:1,c2:ab:cc:f4:2:8a Lease:0x6485f73e}
	I0610 09:38:28.446040    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:7e:c9:b9:4e:e6:61 ID:1,7e:c9:b9:4e:e6:61 Lease:0x6485f6f5}
	I0610 09:38:28.446051    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:2a:80:59:1b:ab:5a ID:1,2a:80:59:1b:ab:5a Lease:0x6485f613}
	I0610 09:38:28.446065    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:4:36:62:66:5d ID:1,ca:4:36:62:66:5d Lease:0x6485f5e7}
	I0610 09:38:28.446074    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ca:e3:b4:f8:a0:57 ID:1,ca:e3:b4:f8:a0:57 Lease:0x6485f4b1}
	I0610 09:38:28.451134    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 09:38:28.505717    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 09:38:28.506467    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 09:38:28.506503    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 09:38:28.506530    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 09:38:28.506549    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 09:38:28.862344    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 09:38:28.862366    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 09:38:28.966386    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 09:38:28.966406    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 09:38:28.966446    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 09:38:28.966466    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 09:38:28.967290    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 09:38:28.967304    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 09:38:30.447641    3473 main.go:141] libmachine: (multinode-826000) DBG | Attempt 1
	I0610 09:38:30.447658    3473 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:38:30.447712    3473 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3484
	I0610 09:38:30.448466    3473 main.go:141] libmachine: (multinode-826000) DBG | Searching for fa:20:3f:84:ae:92 in /var/db/dhcpd_leases ...
	I0610 09:38:30.448517    3473 main.go:141] libmachine: (multinode-826000) DBG | Found 10 entries in /var/db/dhcpd_leases!
	I0610 09:38:30.448528    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:32:30:6d:e9:c8:b4 ID:1,32:30:6d:e9:c8:b4 Lease:0x6484a701}
	I0610 09:38:30.448548    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:a6:94:da:ab:ab:e2 ID:1,a6:94:da:ab:ab:e2 Lease:0x6484a6eb}
	I0610 09:38:30.448557    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:3a:96:c4:94:8e:b0 ID:1,3a:96:c4:94:8e:b0 Lease:0x6485f81d}
	I0610 09:38:30.448578    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:e6:27:b7:b3:13:83 ID:1,e6:27:b7:b3:13:83 Lease:0x6485f7f9}
	I0610 09:38:30.448586    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ea:f7:ed:fb:5e:ee ID:1,ea:f7:ed:fb:5e:ee Lease:0x6485f7ba}
	I0610 09:38:30.448597    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:c2:ab:cc:f4:2:8a ID:1,c2:ab:cc:f4:2:8a Lease:0x6485f73e}
	I0610 09:38:30.448606    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:7e:c9:b9:4e:e6:61 ID:1,7e:c9:b9:4e:e6:61 Lease:0x6485f6f5}
	I0610 09:38:30.448626    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:2a:80:59:1b:ab:5a ID:1,2a:80:59:1b:ab:5a Lease:0x6485f613}
	I0610 09:38:30.448640    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:4:36:62:66:5d ID:1,ca:4:36:62:66:5d Lease:0x6485f5e7}
	I0610 09:38:30.448649    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ca:e3:b4:f8:a0:57 ID:1,ca:e3:b4:f8:a0:57 Lease:0x6485f4b1}
	I0610 09:38:32.449606    3473 main.go:141] libmachine: (multinode-826000) DBG | Attempt 2
	I0610 09:38:32.449620    3473 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:38:32.449659    3473 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3484
	I0610 09:38:32.450436    3473 main.go:141] libmachine: (multinode-826000) DBG | Searching for fa:20:3f:84:ae:92 in /var/db/dhcpd_leases ...
	I0610 09:38:32.450472    3473 main.go:141] libmachine: (multinode-826000) DBG | Found 10 entries in /var/db/dhcpd_leases!
	I0610 09:38:32.450485    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:32:30:6d:e9:c8:b4 ID:1,32:30:6d:e9:c8:b4 Lease:0x6484a701}
	I0610 09:38:32.450495    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:a6:94:da:ab:ab:e2 ID:1,a6:94:da:ab:ab:e2 Lease:0x6484a6eb}
	I0610 09:38:32.450504    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:3a:96:c4:94:8e:b0 ID:1,3a:96:c4:94:8e:b0 Lease:0x6485f81d}
	I0610 09:38:32.450511    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:e6:27:b7:b3:13:83 ID:1,e6:27:b7:b3:13:83 Lease:0x6485f7f9}
	I0610 09:38:32.450519    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ea:f7:ed:fb:5e:ee ID:1,ea:f7:ed:fb:5e:ee Lease:0x6485f7ba}
	I0610 09:38:32.450536    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:c2:ab:cc:f4:2:8a ID:1,c2:ab:cc:f4:2:8a Lease:0x6485f73e}
	I0610 09:38:32.450545    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:7e:c9:b9:4e:e6:61 ID:1,7e:c9:b9:4e:e6:61 Lease:0x6485f6f5}
	I0610 09:38:32.450553    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:2a:80:59:1b:ab:5a ID:1,2a:80:59:1b:ab:5a Lease:0x6485f613}
	I0610 09:38:32.450571    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:4:36:62:66:5d ID:1,ca:4:36:62:66:5d Lease:0x6485f5e7}
	I0610 09:38:32.450585    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ca:e3:b4:f8:a0:57 ID:1,ca:e3:b4:f8:a0:57 Lease:0x6485f4b1}
	I0610 09:38:33.481660    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:33 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 09:38:33.481689    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:33 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 09:38:33.481698    3473 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:38:33 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 09:38:34.452291    3473 main.go:141] libmachine: (multinode-826000) DBG | Attempt 3
	I0610 09:38:34.452316    3473 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:38:34.452398    3473 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3484
	I0610 09:38:34.453119    3473 main.go:141] libmachine: (multinode-826000) DBG | Searching for fa:20:3f:84:ae:92 in /var/db/dhcpd_leases ...
	I0610 09:38:34.453178    3473 main.go:141] libmachine: (multinode-826000) DBG | Found 10 entries in /var/db/dhcpd_leases!
	I0610 09:38:34.453186    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:32:30:6d:e9:c8:b4 ID:1,32:30:6d:e9:c8:b4 Lease:0x6484a701}
	I0610 09:38:34.453197    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:a6:94:da:ab:ab:e2 ID:1,a6:94:da:ab:ab:e2 Lease:0x6484a6eb}
	I0610 09:38:34.453211    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:3a:96:c4:94:8e:b0 ID:1,3a:96:c4:94:8e:b0 Lease:0x6485f81d}
	I0610 09:38:34.453219    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:e6:27:b7:b3:13:83 ID:1,e6:27:b7:b3:13:83 Lease:0x6485f7f9}
	I0610 09:38:34.453228    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ea:f7:ed:fb:5e:ee ID:1,ea:f7:ed:fb:5e:ee Lease:0x6485f7ba}
	I0610 09:38:34.453237    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:c2:ab:cc:f4:2:8a ID:1,c2:ab:cc:f4:2:8a Lease:0x6485f73e}
	I0610 09:38:34.453246    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:7e:c9:b9:4e:e6:61 ID:1,7e:c9:b9:4e:e6:61 Lease:0x6485f6f5}
	I0610 09:38:34.453254    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:2a:80:59:1b:ab:5a ID:1,2a:80:59:1b:ab:5a Lease:0x6485f613}
	I0610 09:38:34.453261    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:4:36:62:66:5d ID:1,ca:4:36:62:66:5d Lease:0x6485f5e7}
	I0610 09:38:34.453278    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ca:e3:b4:f8:a0:57 ID:1,ca:e3:b4:f8:a0:57 Lease:0x6485f4b1}
	I0610 09:38:36.455145    3473 main.go:141] libmachine: (multinode-826000) DBG | Attempt 4
	I0610 09:38:36.455164    3473 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:38:36.455224    3473 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3484
	I0610 09:38:36.455962    3473 main.go:141] libmachine: (multinode-826000) DBG | Searching for fa:20:3f:84:ae:92 in /var/db/dhcpd_leases ...
	I0610 09:38:36.455978    3473 main.go:141] libmachine: (multinode-826000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0610 09:38:36.455995    3473 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:fa:20:3f:84:ae:92 ID:1,fa:20:3f:84:ae:92 Lease:0x6485f88c}
	I0610 09:38:36.456002    3473 main.go:141] libmachine: (multinode-826000) DBG | Found match: fa:20:3f:84:ae:92
	I0610 09:38:36.456008    3473 main.go:141] libmachine: (multinode-826000) DBG | IP: 192.168.64.12
	I0610 09:38:36.456038    3473 main.go:141] libmachine: (multinode-826000) Calling .GetConfigRaw
	I0610 09:38:36.456591    3473 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:38:36.456700    3473 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:38:36.456804    3473 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 09:38:36.456814    3473 main.go:141] libmachine: (multinode-826000) Calling .GetState
	I0610 09:38:36.456899    3473 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:38:36.456955    3473 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3484
	I0610 09:38:36.457659    3473 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 09:38:36.457674    3473 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 09:38:36.457680    3473 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 09:38:36.457685    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:38:36.457785    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:38:36.457893    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:36.458001    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:36.458097    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:38:36.458229    3473 main.go:141] libmachine: Using SSH client type: native
	I0610 09:38:36.458577    3473 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:38:36.458585    3473 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 09:38:37.535299    3473 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 09:38:37.535316    3473 main.go:141] libmachine: Detecting the provisioner...
	I0610 09:38:37.535335    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:38:37.535518    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:38:37.535608    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:37.535698    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:37.535804    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:38:37.535952    3473 main.go:141] libmachine: Using SSH client type: native
	I0610 09:38:37.536275    3473 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:38:37.536303    3473 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 09:38:37.612594    3473 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge0c6143-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0610 09:38:37.612670    3473 main.go:141] libmachine: found compatible host: buildroot
	I0610 09:38:37.612677    3473 main.go:141] libmachine: Provisioning with buildroot...
	I0610 09:38:37.612683    3473 main.go:141] libmachine: (multinode-826000) Calling .GetMachineName
	I0610 09:38:37.612827    3473 buildroot.go:166] provisioning hostname "multinode-826000"
	I0610 09:38:37.612836    3473 main.go:141] libmachine: (multinode-826000) Calling .GetMachineName
	I0610 09:38:37.612940    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:38:37.613047    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:38:37.613159    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:37.613266    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:37.613388    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:38:37.613531    3473 main.go:141] libmachine: Using SSH client type: native
	I0610 09:38:37.613842    3473 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:38:37.613851    3473 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-826000 && echo "multinode-826000" | sudo tee /etc/hostname
	I0610 09:38:37.693963    3473 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-826000
	
	I0610 09:38:37.693981    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:38:37.694118    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:38:37.694210    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:37.694296    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:37.694392    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:38:37.694529    3473 main.go:141] libmachine: Using SSH client type: native
	I0610 09:38:37.694832    3473 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:38:37.694845    3473 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-826000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-826000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-826000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 09:38:37.769734    3473 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 09:38:37.769753    3473 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16578-1235/.minikube CaCertPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16578-1235/.minikube}
	I0610 09:38:37.769764    3473 buildroot.go:174] setting up certificates
	I0610 09:38:37.769775    3473 provision.go:83] configureAuth start
	I0610 09:38:37.769783    3473 main.go:141] libmachine: (multinode-826000) Calling .GetMachineName
	I0610 09:38:37.769882    3473 main.go:141] libmachine: (multinode-826000) Calling .GetIP
	I0610 09:38:37.769979    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:38:37.770051    3473 provision.go:138] copyHostCerts
	I0610 09:38:37.770088    3473 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem
	I0610 09:38:37.770148    3473 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem, removing ...
	I0610 09:38:37.770156    3473 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem
	I0610 09:38:37.770318    3473 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem (1078 bytes)
	I0610 09:38:37.770562    3473 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem
	I0610 09:38:37.770600    3473 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem, removing ...
	I0610 09:38:37.770605    3473 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem
	I0610 09:38:37.770674    3473 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem (1123 bytes)
	I0610 09:38:37.771042    3473 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem
	I0610 09:38:37.771087    3473 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem, removing ...
	I0610 09:38:37.771092    3473 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem
	I0610 09:38:37.771161    3473 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem (1679 bytes)
	I0610 09:38:37.771299    3473 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca-key.pem org=jenkins.multinode-826000 san=[192.168.64.12 192.168.64.12 localhost 127.0.0.1 minikube multinode-826000]
	I0610 09:38:38.062354    3473 provision.go:172] copyRemoteCerts
	I0610 09:38:38.062425    3473 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 09:38:38.062442    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:38:38.062578    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:38:38.062684    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:38.062784    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:38:38.062890    3473 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:38:38.105770    3473 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 09:38:38.105878    3473 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 09:38:38.122161    3473 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 09:38:38.122216    3473 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0610 09:38:38.138387    3473 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 09:38:38.138449    3473 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 09:38:38.154015    3473 provision.go:86] duration metric: configureAuth took 384.229655ms
	I0610 09:38:38.154028    3473 buildroot.go:189] setting minikube options for container-runtime
	I0610 09:38:38.154164    3473 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:38:38.154176    3473 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:38:38.154327    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:38:38.154410    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:38:38.154500    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:38.154585    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:38.154676    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:38:38.154797    3473 main.go:141] libmachine: Using SSH client type: native
	I0610 09:38:38.155091    3473 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:38:38.155099    3473 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 09:38:38.228253    3473 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 09:38:38.228267    3473 buildroot.go:70] root file system type: tmpfs
	I0610 09:38:38.228335    3473 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 09:38:38.228349    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:38:38.228489    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:38:38.228572    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:38.228677    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:38.228760    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:38:38.228912    3473 main.go:141] libmachine: Using SSH client type: native
	I0610 09:38:38.229212    3473 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:38:38.229259    3473 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 09:38:38.310505    3473 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 09:38:38.310529    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:38:38.310663    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:38:38.310759    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:38.310857    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:38.310957    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:38:38.311094    3473 main.go:141] libmachine: Using SSH client type: native
	I0610 09:38:38.311403    3473 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:38:38.311415    3473 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 09:38:38.799528    3473 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 09:38:38.799549    3473 main.go:141] libmachine: Checking connection to Docker...
	I0610 09:38:38.799562    3473 main.go:141] libmachine: (multinode-826000) Calling .GetURL
	I0610 09:38:38.799698    3473 main.go:141] libmachine: Docker is up and running!
	I0610 09:38:38.799706    3473 main.go:141] libmachine: Reticulating splines...
	I0610 09:38:38.799710    3473 client.go:171] LocalClient.Create took 11.033704528s
	I0610 09:38:38.799720    3473 start.go:167] duration metric: libmachine.API.Create for "multinode-826000" took 11.033740567s
	I0610 09:38:38.799729    3473 start.go:300] post-start starting for "multinode-826000" (driver="hyperkit")
	I0610 09:38:38.799735    3473 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 09:38:38.799748    3473 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:38:38.799908    3473 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 09:38:38.799923    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:38:38.800011    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:38:38.800101    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:38.800210    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:38:38.800296    3473 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:38:38.844251    3473 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 09:38:38.846656    3473 command_runner.go:130] > NAME=Buildroot
	I0610 09:38:38.846664    3473 command_runner.go:130] > VERSION=2021.02.12-1-ge0c6143-dirty
	I0610 09:38:38.846668    3473 command_runner.go:130] > ID=buildroot
	I0610 09:38:38.846672    3473 command_runner.go:130] > VERSION_ID=2021.02.12
	I0610 09:38:38.846676    3473 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0610 09:38:38.846854    3473 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 09:38:38.846862    3473 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1235/.minikube/addons for local assets ...
	I0610 09:38:38.846939    3473 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1235/.minikube/files for local assets ...
	I0610 09:38:38.847103    3473 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem -> 16822.pem in /etc/ssl/certs
	I0610 09:38:38.847109    3473 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem -> /etc/ssl/certs/16822.pem
	I0610 09:38:38.847280    3473 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 09:38:38.853501    3473 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem --> /etc/ssl/certs/16822.pem (1708 bytes)
	I0610 09:38:38.868717    3473 start.go:303] post-start completed in 68.979715ms
	I0610 09:38:38.868750    3473 main.go:141] libmachine: (multinode-826000) Calling .GetConfigRaw
	I0610 09:38:38.869309    3473 main.go:141] libmachine: (multinode-826000) Calling .GetIP
	I0610 09:38:38.869465    3473 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/config.json ...
	I0610 09:38:38.869752    3473 start.go:128] duration metric: createHost completed in 11.15600736s
	I0610 09:38:38.869768    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:38:38.869859    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:38:38.869941    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:38.870016    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:38.870109    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:38:38.870220    3473 main.go:141] libmachine: Using SSH client type: native
	I0610 09:38:38.870513    3473 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:38:38.870521    3473 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 09:38:38.942587    3473 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686415118.972883939
	
	I0610 09:38:38.942598    3473 fix.go:207] guest clock: 1686415118.972883939
	I0610 09:38:38.942604    3473 fix.go:220] Guest: 2023-06-10 09:38:38.972883939 -0700 PDT Remote: 2023-06-10 09:38:38.869761 -0700 PDT m=+11.567522724 (delta=103.122939ms)
	I0610 09:38:38.942623    3473 fix.go:191] guest clock delta is within tolerance: 103.122939ms
	I0610 09:38:38.942629    3473 start.go:83] releasing machines lock for "multinode-826000", held for 11.229009233s
	I0610 09:38:38.942646    3473 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:38:38.942778    3473 main.go:141] libmachine: (multinode-826000) Calling .GetIP
	I0610 09:38:38.942874    3473 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:38:38.943201    3473 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:38:38.943305    3473 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:38:38.943388    3473 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 09:38:38.943415    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:38:38.943425    3473 ssh_runner.go:195] Run: cat /version.json
	I0610 09:38:38.943435    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:38:38.943526    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:38:38.943539    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:38:38.943608    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:38.943634    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:38:38.943705    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:38:38.943727    3473 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:38:38.943788    3473 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:38:38.943829    3473 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:38:39.025573    3473 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 09:38:39.026604    3473 command_runner.go:130] > {"iso_version": "v1.30.1-1686096373-16019", "kicbase_version": "v0.0.39-1686006988-16632", "minikube_version": "v1.30.1", "commit": "25a6e24452a99fbf54228d85990beeaaccbd5c35"}
	I0610 09:38:39.026751    3473 ssh_runner.go:195] Run: systemctl --version
	I0610 09:38:39.031124    3473 command_runner.go:130] > systemd 247 (247)
	I0610 09:38:39.031141    3473 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0610 09:38:39.031542    3473 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 09:38:39.034965    3473 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 09:38:39.035049    3473 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 09:38:39.035091    3473 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 09:38:39.044543    3473 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 09:38:39.044566    3473 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 09:38:39.044574    3473 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:38:39.044657    3473 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:38:39.058584    3473 docker.go:633] Got preloaded images: 
	I0610 09:38:39.058596    3473 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0610 09:38:39.058659    3473 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:38:39.065006    3473 command_runner.go:139] > {"Repositories":{}}
	I0610 09:38:39.065324    3473 ssh_runner.go:195] Run: which lz4
	I0610 09:38:39.067435    3473 command_runner.go:130] > /usr/bin/lz4
	I0610 09:38:39.067601    3473 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0610 09:38:39.067722    3473 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 09:38:39.070046    3473 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 09:38:39.070202    3473 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 09:38:39.070223    3473 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (412256110 bytes)
	I0610 09:38:40.519523    3473 docker.go:597] Took 1.451869 seconds to copy over tarball
	I0610 09:38:40.519589    3473 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 09:38:44.117701    3473 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.598110114s)
	I0610 09:38:44.117716    3473 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 09:38:44.144081    3473 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:38:44.150207    3473 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.7-0":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.27.2":"sha256:c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370","registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9":"sha256:c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.27.2":"sha256:ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12","registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56":"sha256:ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.27.2":"sha256:b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee","registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f":"sha256:b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.27.2":"sha256:89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0","registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177":"sha256:89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0610 09:38:44.150291    3473 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0610 09:38:44.161379    3473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:38:44.242567    3473 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:38:45.602471    3473 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.359885473s)
	I0610 09:38:45.602504    3473 start.go:481] detecting cgroup driver to use...
	I0610 09:38:45.602603    3473 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:38:45.614419    3473 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 09:38:45.614814    3473 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 09:38:45.621254    3473 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 09:38:45.627670    3473 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 09:38:45.627707    3473 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 09:38:45.634121    3473 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:38:45.640518    3473 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 09:38:45.646880    3473 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:38:45.653267    3473 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 09:38:45.659811    3473 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 09:38:45.666340    3473 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 09:38:45.672075    3473 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 09:38:45.672124    3473 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 09:38:45.678054    3473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:38:45.762857    3473 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 09:38:45.773803    3473 start.go:481] detecting cgroup driver to use...
	I0610 09:38:45.773876    3473 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 09:38:45.783029    3473 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 09:38:45.783531    3473 command_runner.go:130] > [Unit]
	I0610 09:38:45.783541    3473 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 09:38:45.783550    3473 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 09:38:45.783555    3473 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 09:38:45.783559    3473 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 09:38:45.783563    3473 command_runner.go:130] > StartLimitBurst=3
	I0610 09:38:45.783567    3473 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 09:38:45.783571    3473 command_runner.go:130] > [Service]
	I0610 09:38:45.783574    3473 command_runner.go:130] > Type=notify
	I0610 09:38:45.783577    3473 command_runner.go:130] > Restart=on-failure
	I0610 09:38:45.783584    3473 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 09:38:45.783590    3473 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 09:38:45.783596    3473 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 09:38:45.783602    3473 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 09:38:45.783609    3473 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 09:38:45.783615    3473 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 09:38:45.783621    3473 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 09:38:45.783628    3473 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 09:38:45.783633    3473 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 09:38:45.783637    3473 command_runner.go:130] > ExecStart=
	I0610 09:38:45.783648    3473 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0610 09:38:45.783653    3473 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 09:38:45.783659    3473 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 09:38:45.783665    3473 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 09:38:45.783668    3473 command_runner.go:130] > LimitNOFILE=infinity
	I0610 09:38:45.783672    3473 command_runner.go:130] > LimitNPROC=infinity
	I0610 09:38:45.783676    3473 command_runner.go:130] > LimitCORE=infinity
	I0610 09:38:45.783680    3473 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 09:38:45.783685    3473 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 09:38:45.783689    3473 command_runner.go:130] > TasksMax=infinity
	I0610 09:38:45.783693    3473 command_runner.go:130] > TimeoutStartSec=0
	I0610 09:38:45.783698    3473 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 09:38:45.783701    3473 command_runner.go:130] > Delegate=yes
	I0610 09:38:45.783711    3473 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 09:38:45.783717    3473 command_runner.go:130] > KillMode=process
	I0610 09:38:45.783722    3473 command_runner.go:130] > [Install]
	I0610 09:38:45.783730    3473 command_runner.go:130] > WantedBy=multi-user.target
	I0610 09:38:45.783880    3473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:38:45.792806    3473 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 09:38:45.804389    3473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:38:45.813241    3473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:38:45.821639    3473 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 09:38:45.841034    3473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:38:45.850000    3473 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:38:45.862186    3473 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 09:38:45.862531    3473 ssh_runner.go:195] Run: which cri-dockerd
	I0610 09:38:45.864664    3473 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 09:38:45.864790    3473 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 09:38:45.870398    3473 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 09:38:45.881504    3473 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 09:38:45.964319    3473 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 09:38:46.055927    3473 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 09:38:46.055943    3473 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 09:38:46.067030    3473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:38:46.149878    3473 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:38:47.440336    3473 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.290440937s)
	I0610 09:38:47.440412    3473 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:38:47.525491    3473 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 09:38:47.607918    3473 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:38:47.696783    3473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:38:47.785687    3473 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 09:38:47.796389    3473 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I0610 09:38:47.819432    3473 out.go:177] 
	W0610 09:38:47.841144    3473 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0610 09:38:47.841169    3473 out.go:239] * 
	* 
	W0610 09:38:47.842315    3473 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 09:38:47.904804    3473 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-826000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000: exit status 6 (128.266404ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 09:38:48.058359    3488 status.go:415] kubeconfig endpoint: extract IP: "multinode-826000" does not appear in /Users/jenkins/minikube-integration/16578-1235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-826000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (20.77s)
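
Root cause for this block: `sudo systemctl restart cri-docker.socket` exited with status 1, and minikube aborted with RUNTIME_ENABLE before any kubeconfig entry was written. The log only points at "journalctl -xe", so a minimal triage sketch, assuming the hyperkit VM from this run is still up and reachable over SSH:

	# Inspect the failing unit; the unit name is taken from the error above.
	minikube ssh -p multinode-826000 -- sudo systemctl status cri-docker.socket
	# Pull the journal entries the error message refers to.
	minikube ssh -p multinode-826000 -- sudo journalctl -xeu cri-docker.socket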

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (77.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (75.690096ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-826000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- rollout status deployment/busybox: exit status 1 (75.375359ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (75.068206ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (81.623925ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (80.604626ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (80.360008ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (83.381877ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (82.215387ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (76.962201ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0610 09:39:20.788345    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:39:20.794660    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:39:20.805409    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:39:20.827523    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:39:20.869346    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:39:20.949759    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:39:21.112054    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:39:21.433464    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:39:22.074718    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:39:23.356957    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (81.802183ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0610 09:39:25.919271    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:39:31.041511    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:39:41.283707    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (77.508745ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0610 09:40:01.764707    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (81.326196ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (75.32275ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- exec  -- nslookup kubernetes.io: exit status 1 (74.258591ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- exec  -- nslookup kubernetes.default: exit status 1 (74.725918ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (74.795916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000: exit status 6 (126.532048ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 09:40:05.187276    3552 status.go:415] kubeconfig endpoint: extract IP: "multinode-826000" does not appear in /Users/jenkins/minikube-integration/16578-1235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-826000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (77.13s)
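
Every step in this block fails with the same `error: no server found for cluster "multinode-826000"`: because FreshStart2Nodes aborted above, the cluster entry was never written to the kubeconfig, so each kubectl invocation has no endpoint to talk to. The status warning itself names the repair command; a sketch of what it suggests (it cannot make these tests pass retroactively, since the container runtime never came up):

	# Rewrite the kubeconfig entry for this profile, per the warning above.
	minikube update-context -p multinode-826000
	# Verify the context now resolves.
	kubectl config get-contexts multinode-826000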

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-826000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (76.195319ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000: exit status 6 (124.83145ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 09:40:05.388790    3560 status.go:415] kubeconfig endpoint: extract IP: "multinode-826000" does not appear in /Users/jenkins/minikube-integration/16578-1235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-826000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.20s)
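
The kubeconfig claim in the post-mortem (`"multinode-826000" does not appear in .../kubeconfig`) can be verified directly. A sketch, assuming the workspace path from the log:

	# List the cluster names actually present in this run's kubeconfig.
	kubectl config view --kubeconfig /Users/jenkins/minikube-integration/16578-1235/kubeconfig -o jsonpath='{.clusters[*].name}'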

                                                
                                    
TestMultiNode/serial/AddNode (0.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-826000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-826000 -v 3 --alsologtostderr: exit status 119 (187.644228ms)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-826000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 09:40:05.436638    3565 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:40:05.436905    3565 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:40:05.436912    3565 out.go:309] Setting ErrFile to fd 2...
	I0610 09:40:05.436916    3565 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:40:05.437039    3565 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
	I0610 09:40:05.437377    3565 mustload.go:65] Loading cluster: multinode-826000
	I0610 09:40:05.437637    3565 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:40:05.437967    3565 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:40:05.438013    3565 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:40:05.444613    3565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51046
	I0610 09:40:05.444976    3565 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:40:05.445392    3565 main.go:141] libmachine: Using API Version  1
	I0610 09:40:05.445404    3565 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:40:05.445634    3565 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:40:05.445749    3565 main.go:141] libmachine: (multinode-826000) Calling .GetState
	I0610 09:40:05.445830    3565 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:40:05.445896    3565 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3484
	I0610 09:40:05.446779    3565 host.go:66] Checking if "multinode-826000" exists ...
	I0610 09:40:05.447035    3565 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:40:05.447056    3565 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:40:05.453592    3565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51048
	I0610 09:40:05.453886    3565 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:40:05.454230    3565 main.go:141] libmachine: Using API Version  1
	I0610 09:40:05.454244    3565 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:40:05.454454    3565 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:40:05.454554    3565 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:40:05.454641    3565 api_server.go:166] Checking apiserver status ...
	I0610 09:40:05.454698    3565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:40:05.454722    3565 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:05.454811    3565 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:05.454889    3565 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:05.454978    3565 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:05.455059    3565 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	W0610 09:40:05.496525    3565 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:40:05.518600    3565 out.go:177] * This control plane is not running! (state=Stopped)
	W0610 09:40:05.540008    3565 out.go:239] ! This is unusual - you may want to investigate using "minikube logs -p multinode-826000"
	! This is unusual - you may want to investigate using "minikube logs -p multinode-826000"
	I0610 09:40:05.560894    3565 out.go:177]   To start a cluster, run: "minikube start -p multinode-826000"

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-826000 -v 3 --alsologtostderr" : exit status 119
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000: exit status 6 (126.760559ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 09:40:05.703644    3569 status.go:415] kubeconfig endpoint: extract IP: "multinode-826000" does not appear in /Users/jenkins/minikube-integration/16578-1235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-826000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/AddNode (0.31s)
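
Per the trace, the "control plane is not running" verdict comes from a single probe: a pgrep for the apiserver over SSH, which returned status 1 with empty output. That probe can be replayed by hand with the exact pattern from the log:

	# Empty output / exit 1 here reproduces the state=Stopped verdict.
	minikube ssh -p multinode-826000 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'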

                                                
                                    
TestMultiNode/serial/ProfileList (0.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:155: expected profile "multinode-826000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-826000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-826000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.27.2\",\"ClusterName\":\"multinode-826000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.64.12\",\"Port\":8443,\"KubernetesVersion\":\"v1.27.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000: exit status 6 (125.269528ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 09:40:05.991790    3579 status.go:415] kubeconfig endpoint: extract IP: "multinode-826000" does not appear in /Users/jenkins/minikube-integration/16578-1235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-826000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/ProfileList (0.29s)
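
The assertion compares the node count embedded in the profile JSON: 3 nodes were expected (2 from the initial start plus 1 from AddNode), but only the single control-plane entry in Config.Nodes was ever recorded. With the structure shown above, the count can be extracted directly; a sketch assuming jq is available on the runner:

	out/minikube-darwin-amd64 profile list --output json \
	  | jq '.valid[] | select(.Name == "multinode-826000") | .Config.Nodes | length'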

                                                
                                    
TestMultiNode/serial/CopyFile (0.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-826000 status --output json --alsologtostderr: exit status 6 (126.247066ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-826000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 09:40:06.039735    3584 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:40:06.039914    3584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:40:06.039921    3584 out.go:309] Setting ErrFile to fd 2...
	I0610 09:40:06.039925    3584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:40:06.040041    3584 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
	I0610 09:40:06.040228    3584 out.go:303] Setting JSON to true
	I0610 09:40:06.040249    3584 mustload.go:65] Loading cluster: multinode-826000
	I0610 09:40:06.040293    3584 notify.go:220] Checking for updates...
	I0610 09:40:06.040508    3584 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:40:06.040528    3584 status.go:255] checking status of multinode-826000 ...
	I0610 09:40:06.040872    3584 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:40:06.040916    3584 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:40:06.047668    3584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51072
	I0610 09:40:06.047964    3584 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:40:06.048382    3584 main.go:141] libmachine: Using API Version  1
	I0610 09:40:06.048413    3584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:40:06.048621    3584 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:40:06.048724    3584 main.go:141] libmachine: (multinode-826000) Calling .GetState
	I0610 09:40:06.048805    3584 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:40:06.048873    3584 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3484
	I0610 09:40:06.049783    3584 status.go:330] multinode-826000 host status = "Running" (err=<nil>)
	I0610 09:40:06.049798    3584 host.go:66] Checking if "multinode-826000" exists ...
	I0610 09:40:06.050037    3584 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:40:06.050075    3584 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:40:06.056748    3584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51074
	I0610 09:40:06.057063    3584 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:40:06.057369    3584 main.go:141] libmachine: Using API Version  1
	I0610 09:40:06.057385    3584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:40:06.057604    3584 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:40:06.057697    3584 main.go:141] libmachine: (multinode-826000) Calling .GetIP
	I0610 09:40:06.057781    3584 host.go:66] Checking if "multinode-826000" exists ...
	I0610 09:40:06.058057    3584 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:40:06.058087    3584 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:40:06.064612    3584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51076
	I0610 09:40:06.064921    3584 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:40:06.065258    3584 main.go:141] libmachine: Using API Version  1
	I0610 09:40:06.065271    3584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:40:06.065464    3584 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:40:06.065555    3584 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:40:06.065685    3584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 09:40:06.065706    3584 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:06.065785    3584 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:06.065862    3584 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:06.065934    3584 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:06.066016    3584 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:40:06.106427    3584 ssh_runner.go:195] Run: systemctl --version
	I0610 09:40:06.109834    3584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0610 09:40:06.118293    3584 status.go:415] kubeconfig endpoint: extract IP: "multinode-826000" does not appear in /Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:40:06.118316    3584 api_server.go:166] Checking apiserver status ...
	I0610 09:40:06.118354    3584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:40:06.125486    3584 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:40:06.125499    3584 status.go:421] multinode-826000 apiserver status = Stopped (err=<nil>)
	I0610 09:40:06.125515    3584 status.go:257] multinode-826000 status: &{Name:multinode-826000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:175: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-826000 status --output json --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000: exit status 6 (125.405907ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 09:40:06.243916    3589 status.go:415] kubeconfig endpoint: extract IP: "multinode-826000" does not appear in /Users/jenkins/minikube-integration/16578-1235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-826000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/CopyFile (0.25s)
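
CopyFile never reaches the copy step: the preflight status call exits 6 because Kubeconfig is reported as "Misconfigured". The JSON it still prints can be inspected directly; a minimal sketch, assuming jq is available (jq is not part of the harness):

	# status exits 6 here but still emits the JSON object shown above
	out/minikube-darwin-amd64 -p multinode-826000 status --output json | jq -r '.Kubelet, .APIServer, .Kubeconfig'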

                                                
                                    
TestMultiNode/serial/StopNode (0.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-826000 node stop m03: exit status 85 (133.927928ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-826000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-826000 status: exit status 6 (126.242162ms)

                                                
                                                
-- stdout --
	multinode-826000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 09:40:06.504047    3596 status.go:415] kubeconfig endpoint: extract IP: "multinode-826000" does not appear in /Users/jenkins/minikube-integration/16578-1235/kubeconfig

                                                
                                                
** /stderr **
multinode_test.go:219: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-826000 status" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000: exit status 6 (126.897558ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 09:40:06.631138    3601 status.go:415] kubeconfig endpoint: extract IP: "multinode-826000" does not appear in /Users/jenkins/minikube-integration/16578-1235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-826000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/StopNode (0.39s)
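
Exit status 85 (GUEST_NODE_RETRIEVE) indicates node m03 was never registered: the earlier FreshStart2Nodes failure left the profile with only its control-plane node. One way to confirm that before stopping a node, again as a manual step rather than part of the test:

	# Show which nodes the profile knows about; m03 should be absent
	out/minikube-darwin-amd64 node list -p multinode-826000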

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-826000 node start m03 --alsologtostderr: exit status 85 (131.41249ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 09:40:06.679758    3606 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:40:06.680051    3606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:40:06.680059    3606 out.go:309] Setting ErrFile to fd 2...
	I0610 09:40:06.680063    3606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:40:06.680176    3606 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
	I0610 09:40:06.680518    3606 mustload.go:65] Loading cluster: multinode-826000
	I0610 09:40:06.680795    3606 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:40:06.702851    3606 out.go:177] 
	W0610 09:40:06.724152    3606 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0610 09:40:06.724176    3606 out.go:239] * 
	* 
	W0610 09:40:06.727861    3606 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 09:40:06.749080    3606 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I0610 09:40:06.679758    3606 out.go:296] Setting OutFile to fd 1 ...
I0610 09:40:06.680051    3606 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:40:06.680059    3606 out.go:309] Setting ErrFile to fd 2...
I0610 09:40:06.680063    3606 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:40:06.680176    3606 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
I0610 09:40:06.680518    3606 mustload.go:65] Loading cluster: multinode-826000
I0610 09:40:06.680795    3606 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:40:06.702851    3606 out.go:177] 
W0610 09:40:06.724152    3606 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0610 09:40:06.724176    3606 out.go:239] * 
* 
W0610 09:40:06.727861    3606 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0610 09:40:06.749080    3606 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-826000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-826000 status: exit status 6 (132.063395ms)

                                                
                                                
-- stdout --
	multinode-826000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 09:40:06.891016    3608 status.go:415] kubeconfig endpoint: extract IP: "multinode-826000" does not appear in /Users/jenkins/minikube-integration/16578-1235/kubeconfig

                                                
                                                
** /stderr **
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-826000 status" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000: exit status 6 (128.474693ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 09:40:07.023898    3613 status.go:415] kubeconfig endpoint: extract IP: "multinode-826000" does not appear in /Users/jenkins/minikube-integration/16578-1235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-826000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.39s)
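
StartAfterStop fails the same way as StopNode: m03 does not exist, so `node start m03` exits 85. The error box names a per-command log file; reviewing it (path copied from the output above) is a reasonable next step before filing the issue it requests:

	# Inspect the node-command log referenced in the error box
	cat /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log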

                                                
                                    
TestMultiNode/serial/DeleteNode (2.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-826000 node delete m03: exit status 80 (230.397154ms)

                                                
                                                
-- stdout --
	* Deleting node m03 from cluster multinode-826000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_DELETE: deleting node: retrieve: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-826000 node delete m03": exit status 80
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 status --alsologtostderr
multinode_test.go:406: status says both hosts are not running: args "out/minikube-darwin-amd64 -p multinode-826000 status --alsologtostderr": multinode-826000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
multinode_test.go:410: status says both kubelets are not running: args "out/minikube-darwin-amd64 -p multinode-826000 status --alsologtostderr": multinode-826000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
multinode_test.go:437: expected 2 nodes Ready status to be True, got 
-- stdout --
	' True
	'

                                                
                                                
-- /stdout --
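
Only one `True` comes back because a single node is Ready, not the expected two. The nested quoting in the go-template above is easy to get wrong; an equivalent JSONPath query, offered as an illustrative alternative to the harness's own check:

	# One line per node: the status of its Ready condition
	kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'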
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-826000 logs -n 25: (1.994211529s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-826000 -- apply -f                   | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:38 PDT |                     |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- rollout                    | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:38 PDT |                     |
	|         | status deployment/busybox                         |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o                | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:38 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o                | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:38 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o                | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:38 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o                | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:38 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o                | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:38 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o                | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o                | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o                | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o                | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o                | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o                | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- exec                       | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | -- nslookup kubernetes.io                         |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- exec                       | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | -- nslookup kubernetes.default                    |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000                               | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | -- exec  -- nslookup                              |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o                | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| node    | add -p multinode-826000 -v 3                      | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | --alsologtostderr                                 |                  |         |         |                     |                     |
	| node    | multinode-826000 node stop m03                    | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	| node    | multinode-826000 node start                       | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | m03 --alsologtostderr                             |                  |         |         |                     |                     |
	| node    | list -p multinode-826000                          | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	| stop    | -p multinode-826000                               | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT | 10 Jun 23 09:40 PDT |
	| start   | -p multinode-826000                               | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT | 10 Jun 23 09:41 PDT |
	|         | --wait=true -v=8                                  |                  |         |         |                     |                     |
	|         | --alsologtostderr                                 |                  |         |         |                     |                     |
	| node    | list -p multinode-826000                          | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:41 PDT |                     |
	| node    | multinode-826000 node delete                      | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:41 PDT |                     |
	|         | m03                                               |                  |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:40:15
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.4 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:40:15.350510    3624 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:40:15.350706    3624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:40:15.350712    3624 out.go:309] Setting ErrFile to fd 2...
	I0610 09:40:15.350716    3624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:40:15.350862    3624 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
	I0610 09:40:15.352262    3624 out.go:303] Setting JSON to false
	I0610 09:40:15.371605    3624 start.go:127] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2385,"bootTime":1686412830,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0610 09:40:15.371691    3624 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:40:15.393321    3624 out.go:177] * [multinode-826000] minikube v1.30.1 on Darwin 13.4
	I0610 09:40:15.414498    3624 notify.go:220] Checking for updates...
	I0610 09:40:15.414528    3624 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:40:15.436366    3624 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:40:15.458240    3624 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 09:40:15.479302    3624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:40:15.500183    3624 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube
	I0610 09:40:15.521161    3624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:40:15.543075    3624 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:40:15.543262    3624 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:40:15.543911    3624 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:40:15.543958    3624 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:40:15.551428    3624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51122
	I0610 09:40:15.551759    3624 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:40:15.552205    3624 main.go:141] libmachine: Using API Version  1
	I0610 09:40:15.552217    3624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:40:15.552419    3624 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:40:15.552539    3624 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:40:15.580190    3624 out.go:177] * Using the hyperkit driver based on existing profile
	I0610 09:40:15.601268    3624 start.go:297] selected driver: hyperkit
	I0610 09:40:15.601295    3624 start.go:875] validating driver "hyperkit" against &{Name:multinode-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:40:15.601453    3624 start.go:886] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:40:15.601562    3624 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:40:15.601768    3624 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/16578-1235/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 09:40:15.609882    3624 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.30.1
	I0610 09:40:15.613339    3624 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:40:15.613357    3624 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 09:40:15.615654    3624 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 09:40:15.615686    3624 cni.go:84] Creating CNI manager for ""
	I0610 09:40:15.615696    3624 cni.go:136] 1 nodes found, recommending kindnet
	I0610 09:40:15.615705    3624 start_flags.go:319] config:
	{Name:multinode-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:40:15.615864    3624 iso.go:125] acquiring lock: {Name:mkc028968ad126cece35ec994c5f11699b30bc34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:40:15.659044    3624 out.go:177] * Starting control plane node multinode-826000 in cluster multinode-826000
	I0610 09:40:15.680035    3624 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:40:15.680125    3624 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0610 09:40:15.680157    3624 cache.go:57] Caching tarball of preloaded images
	I0610 09:40:15.680363    3624 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 09:40:15.680387    3624 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 09:40:15.680546    3624 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/config.json ...
	I0610 09:40:15.681355    3624 cache.go:195] Successfully downloaded all kic artifacts
	I0610 09:40:15.681404    3624 start.go:364] acquiring machines lock for multinode-826000: {Name:mk73e5861e2a32aaad6eda5ce405a92c74d96949 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 09:40:15.681519    3624 start.go:368] acquired machines lock for "multinode-826000" in 97.869µs
	I0610 09:40:15.681554    3624 start.go:96] Skipping create...Using existing machine configuration
	I0610 09:40:15.681568    3624 fix.go:55] fixHost starting: 
	I0610 09:40:15.682020    3624 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:40:15.682057    3624 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:40:15.689464    3624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51124
	I0610 09:40:15.689806    3624 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:40:15.690161    3624 main.go:141] libmachine: Using API Version  1
	I0610 09:40:15.690173    3624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:40:15.690417    3624 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:40:15.690530    3624 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:40:15.690637    3624 main.go:141] libmachine: (multinode-826000) Calling .GetState
	I0610 09:40:15.690722    3624 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:40:15.690789    3624 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3484
	I0610 09:40:15.691673    3624 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid 3484 missing from process table
	I0610 09:40:15.691701    3624 fix.go:103] recreateIfNeeded on multinode-826000: state=Stopped err=<nil>
	I0610 09:40:15.691720    3624 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	W0610 09:40:15.691803    3624 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 09:40:15.713156    3624 out.go:177] * Restarting existing hyperkit VM for "multinode-826000" ...
	I0610 09:40:15.755211    3624 main.go:141] libmachine: (multinode-826000) Calling .Start
	I0610 09:40:15.755525    3624 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:40:15.755607    3624 main.go:141] libmachine: (multinode-826000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/hyperkit.pid
	I0610 09:40:15.757358    3624 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid 3484 missing from process table
	I0610 09:40:15.757377    3624 main.go:141] libmachine: (multinode-826000) DBG | pid 3484 is in state "Stopped"
	I0610 09:40:15.757395    3624 main.go:141] libmachine: (multinode-826000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/hyperkit.pid...
	I0610 09:40:15.757612    3624 main.go:141] libmachine: (multinode-826000) DBG | Using UUID 39ebe0dc-07ad-11ee-b579-f01898ef957c
	I0610 09:40:15.873305    3624 main.go:141] libmachine: (multinode-826000) DBG | Generated MAC fa:20:3f:84:ae:92
	I0610 09:40:15.873327    3624 main.go:141] libmachine: (multinode-826000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-826000
	I0610 09:40:15.873460    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:15 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"39ebe0dc-07ad-11ee-b579-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003edc20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/bzimage", Initrd:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 09:40:15.873492    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:15 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"39ebe0dc-07ad-11ee-b579-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003edc20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/bzimage", Initrd:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 09:40:15.873559    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:15 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "39ebe0dc-07ad-11ee-b579-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/multinode-826000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/tty,log=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/bzimage,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-826000"}
	I0610 09:40:15.873602    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:15 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 39ebe0dc-07ad-11ee-b579-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/multinode-826000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/tty,log=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/console-ring -f kexec,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/bzimage,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-826000"
	I0610 09:40:15.873613    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:15 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 09:40:15.874916    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:15 DEBUG: hyperkit: Pid is 3636
	I0610 09:40:15.875253    3624 main.go:141] libmachine: (multinode-826000) DBG | Attempt 0
	I0610 09:40:15.875265    3624 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:40:15.875321    3624 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3636
	I0610 09:40:15.877114    3624 main.go:141] libmachine: (multinode-826000) DBG | Searching for fa:20:3f:84:ae:92 in /var/db/dhcpd_leases ...
	I0610 09:40:15.877165    3624 main.go:141] libmachine: (multinode-826000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0610 09:40:15.877182    3624 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:fa:20:3f:84:ae:92 ID:1,fa:20:3f:84:ae:92 Lease:0x6485f88c}
	I0610 09:40:15.877198    3624 main.go:141] libmachine: (multinode-826000) DBG | Found match: fa:20:3f:84:ae:92
	I0610 09:40:15.877210    3624 main.go:141] libmachine: (multinode-826000) DBG | IP: 192.168.64.12
	I0610 09:40:15.877261    3624 main.go:141] libmachine: (multinode-826000) Calling .GetConfigRaw
	I0610 09:40:15.877857    3624 main.go:141] libmachine: (multinode-826000) Calling .GetIP
	I0610 09:40:15.878060    3624 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/config.json ...
	I0610 09:40:15.878481    3624 machine.go:88] provisioning docker machine ...
	I0610 09:40:15.878491    3624 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:40:15.878598    3624 main.go:141] libmachine: (multinode-826000) Calling .GetMachineName
	I0610 09:40:15.878709    3624 buildroot.go:166] provisioning hostname "multinode-826000"
	I0610 09:40:15.878722    3624 main.go:141] libmachine: (multinode-826000) Calling .GetMachineName
	I0610 09:40:15.878809    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:15.878895    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:15.878974    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:15.879073    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:15.879152    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:15.879278    3624 main.go:141] libmachine: Using SSH client type: native
	I0610 09:40:15.879641    3624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:40:15.879651    3624 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-826000 && echo "multinode-826000" | sudo tee /etc/hostname
	I0610 09:40:15.881948    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:15 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 09:40:15.936978    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:15 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 09:40:15.937870    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 09:40:15.937903    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 09:40:15.937920    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 09:40:15.937948    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 09:40:16.294243    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:16 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 09:40:16.294259    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:16 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 09:40:16.398389    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 09:40:16.398416    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 09:40:16.398462    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 09:40:16.398525    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 09:40:16.399297    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:16 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 09:40:16.399309    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:16 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 09:40:20.888139    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 09:40:20.888226    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 09:40:20.888235    3624 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:40:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 09:40:26.971207    3624 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-826000
	
	I0610 09:40:26.971227    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:26.971389    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:26.971481    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:26.971563    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:26.971670    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:26.971809    3624 main.go:141] libmachine: Using SSH client type: native
	I0610 09:40:26.972114    3624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:40:26.972126    3624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-826000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-826000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-826000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 09:40:27.046524    3624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
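The exchange above makes the /etc/hosts entry idempotent: the hostname is added only if no line already carries it, and an existing 127.0.1.1 entry is rewritten in preference to appending a new one. A minimal local sketch of the same edit in Go; the file name hosts.test and everything not quoted from the log is illustrative:

// hostsensure.go: a sketch of the idempotent /etc/hosts edit the
// provisioner runs over SSH above: if no line already names the host,
// rewrite an existing 127.0.1.1 entry, otherwise append a new one.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		// Rough stand-in for the log's grep -xq '.*\smultinode-826000' check.
		if strings.HasSuffix(l, " "+hostname) || strings.HasSuffix(l, "\t"+hostname) {
			return nil // already present, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// Writing to a scratch copy rather than the real /etc/hosts.
	if err := ensureHostEntry("hosts.test", "multinode-826000"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}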
	I0610 09:40:27.046545    3624 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16578-1235/.minikube CaCertPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16578-1235/.minikube}
	I0610 09:40:27.046568    3624 buildroot.go:174] setting up certificates
	I0610 09:40:27.046578    3624 provision.go:83] configureAuth start
	I0610 09:40:27.046586    3624 main.go:141] libmachine: (multinode-826000) Calling .GetMachineName
	I0610 09:40:27.046732    3624 main.go:141] libmachine: (multinode-826000) Calling .GetIP
	I0610 09:40:27.046816    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:27.046902    3624 provision.go:138] copyHostCerts
	I0610 09:40:27.046943    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem
	I0610 09:40:27.046991    3624 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem, removing ...
	I0610 09:40:27.046999    3624 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem
	I0610 09:40:27.047133    3624 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem (1078 bytes)
	I0610 09:40:27.047323    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem
	I0610 09:40:27.047354    3624 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem, removing ...
	I0610 09:40:27.047358    3624 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem
	I0610 09:40:27.047440    3624 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem (1123 bytes)
	I0610 09:40:27.047651    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem
	I0610 09:40:27.047688    3624 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem, removing ...
	I0610 09:40:27.047692    3624 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem
	I0610 09:40:27.047761    3624 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem (1679 bytes)
	I0610 09:40:27.047902    3624 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca-key.pem org=jenkins.multinode-826000 san=[192.168.64.12 192.168.64.12 localhost 127.0.0.1 minikube multinode-826000]
	I0610 09:40:27.247387    3624 provision.go:172] copyRemoteCerts
	I0610 09:40:27.247443    3624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 09:40:27.247485    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:27.247761    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:27.247859    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:27.248001    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:27.248130    3624 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:40:27.289551    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 09:40:27.289624    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0610 09:40:27.305078    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 09:40:27.305135    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 09:40:27.320486    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 09:40:27.320541    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 09:40:27.336023    3624 provision.go:86] duration metric: configureAuth took 289.435589ms
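configureAuth above copies the host-side CA material and mints a server certificate whose SANs cover the VM IP, localhost and the node names (the san=[...] list in the log). A self-contained sketch of that certificate shape using crypto/x509, with throwaway keys and an illustrative validity period; this is not minikube's actual provisioning code:

// servercert.go: mint an ephemeral CA, then a CA-signed server
// certificate whose SANs mirror the san=[...] list in the log above.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) // errors elided in this sketch
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour), // illustrative validity
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-826000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// SANs mirroring the log: VM IP, localhost, node names.
		DNSNames:    []string{"localhost", "minikube", "multinode-826000"},
		IPAddresses: []net.IP{net.ParseIP("192.168.64.12"), net.ParseIP("127.0.0.1")},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server cert: %d DER bytes\n", len(der))
}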
	I0610 09:40:27.336034    3624 buildroot.go:189] setting minikube options for container-runtime
	I0610 09:40:27.336154    3624 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:40:27.336166    3624 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:40:27.336302    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:27.336404    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:27.336508    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:27.336610    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:27.336700    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:27.336829    3624 main.go:141] libmachine: Using SSH client type: native
	I0610 09:40:27.337127    3624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:40:27.337135    3624 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 09:40:27.404348    3624 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 09:40:27.404360    3624 buildroot.go:70] root file system type: tmpfs
	I0610 09:40:27.404429    3624 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 09:40:27.404445    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:27.404572    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:27.404673    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:27.404769    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:27.404861    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:27.404986    3624 main.go:141] libmachine: Using SSH client type: native
	I0610 09:40:27.405282    3624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:40:27.405328    3624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 09:40:27.482212    3624 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 09:40:27.482239    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:27.482370    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:27.482464    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:27.482555    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:27.482643    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:27.482776    3624 main.go:141] libmachine: Using SSH client type: native
	I0610 09:40:27.483083    3624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:40:27.483095    3624 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 09:40:28.007304    3624 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 09:40:28.007317    3624 machine.go:91] provisioned docker machine in 12.128871759s
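The docker unit above is rendered host-side, shipped as docker.service.new, and only moved into place (followed by daemon-reload, enable and restart) when diff reports a difference, so an unchanged unit never triggers a docker restart; here the diff fails because no unit exists yet, which forces the install path. A sketch of the render step with text/template; the template and its fields are illustrative, not minikube's real types:

// unitrender.go: render a docker systemd unit from a template,
// mirroring the render-then-diff-then-swap step in the log above.
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
# Clear the inherited ExecStart first; systemd rejects a unit with more
# than one ExecStart unless Type=oneshot.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}}

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	// Write the candidate unit; a provisioner would then diff it against
	// the installed unit and swap + restart docker only on change.
	f, err := os.Create("docker.service.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := t.Execute(f, struct{ Provider string }{"hyperkit"}); err != nil {
		panic(err)
	}
}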
	I0610 09:40:28.007328    3624 start.go:300] post-start starting for "multinode-826000" (driver="hyperkit")
	I0610 09:40:28.007338    3624 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 09:40:28.007350    3624 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:40:28.007539    3624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 09:40:28.007551    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:28.007651    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:28.007748    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:28.007848    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:28.007946    3624 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:40:28.050570    3624 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 09:40:28.053016    3624 command_runner.go:130] > NAME=Buildroot
	I0610 09:40:28.053023    3624 command_runner.go:130] > VERSION=2021.02.12-1-ge0c6143-dirty
	I0610 09:40:28.053028    3624 command_runner.go:130] > ID=buildroot
	I0610 09:40:28.053032    3624 command_runner.go:130] > VERSION_ID=2021.02.12
	I0610 09:40:28.053036    3624 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0610 09:40:28.053177    3624 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 09:40:28.053189    3624 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1235/.minikube/addons for local assets ...
	I0610 09:40:28.053267    3624 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1235/.minikube/files for local assets ...
	I0610 09:40:28.053412    3624 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem -> 16822.pem in /etc/ssl/certs
	I0610 09:40:28.053419    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem -> /etc/ssl/certs/16822.pem
	I0610 09:40:28.053570    3624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 09:40:28.059735    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem --> /etc/ssl/certs/16822.pem (1708 bytes)
	I0610 09:40:28.075144    3624 start.go:303] post-start completed in 67.807797ms
	I0610 09:40:28.075158    3624 fix.go:57] fixHost completed within 12.393638461s
	I0610 09:40:28.075175    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:28.075316    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:28.075426    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:28.075516    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:28.075604    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:28.075731    3624 main.go:141] libmachine: Using SSH client type: native
	I0610 09:40:28.076033    3624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:40:28.076043    3624 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 09:40:28.146045    3624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686415227.938191871
	
	I0610 09:40:28.146058    3624 fix.go:207] guest clock: 1686415227.938191871
	I0610 09:40:28.146064    3624 fix.go:220] Guest: 2023-06-10 09:40:27.938191871 -0700 PDT Remote: 2023-06-10 09:40:28.07516 -0700 PDT m=+12.757794261 (delta=-136.968129ms)
	I0610 09:40:28.146086    3624 fix.go:191] guest clock delta is within tolerance: -136.968129ms
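The guest-clock check above compares the VM's "date +%s.%N" output against the host clock and accepts the skew when it falls inside a tolerance. A sketch of that comparison; the 2-second tolerance is an assumption, since the log does not state minikube's actual threshold:

// clockdelta.go: parse a guest's `date +%s.%N` output and check the
// skew against a tolerance, as fix.go does in the log above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	// Expecting "<seconds>.<nanoseconds>" with a 9-digit fraction from %N.
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1686415227.938191871") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed, not minikube's documented value
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
}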
	I0610 09:40:28.146090    3624 start.go:83] releasing machines lock for "multinode-826000", held for 12.464605198s
	I0610 09:40:28.146108    3624 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:40:28.146242    3624 main.go:141] libmachine: (multinode-826000) Calling .GetIP
	I0610 09:40:28.146360    3624 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:40:28.146690    3624 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:40:28.146781    3624 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:40:28.146868    3624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 09:40:28.146904    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:28.146939    3624 ssh_runner.go:195] Run: cat /version.json
	I0610 09:40:28.146951    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:28.147007    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:28.147056    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:28.147138    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:28.147155    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:28.147272    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:28.147288    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:28.147372    3624 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:40:28.147389    3624 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:40:28.184297    3624 command_runner.go:130] > {"iso_version": "v1.30.1-1686096373-16019", "kicbase_version": "v0.0.39-1686006988-16632", "minikube_version": "v1.30.1", "commit": "25a6e24452a99fbf54228d85990beeaaccbd5c35"}
	I0610 09:40:28.184527    3624 ssh_runner.go:195] Run: systemctl --version
	I0610 09:40:28.232085    3624 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 09:40:28.233060    3624 command_runner.go:130] > systemd 247 (247)
	I0610 09:40:28.233082    3624 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0610 09:40:28.233234    3624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 09:40:28.237390    3624 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 09:40:28.237445    3624 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 09:40:28.237496    3624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 09:40:28.247658    3624 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 09:40:28.247685    3624 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 09:40:28.247694    3624 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:40:28.247778    3624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:40:28.262674    3624 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
	I0610 09:40:28.262696    3624 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
	I0610 09:40:28.262707    3624 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
	I0610 09:40:28.262712    3624 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
	I0610 09:40:28.262716    3624 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0610 09:40:28.262720    3624 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0610 09:40:28.262724    3624 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 09:40:28.262728    3624 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:40:28.263520    3624 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:40:28.263533    3624 docker.go:563] Images already preloaded, skipping extraction
	I0610 09:40:28.263540    3624 start.go:481] detecting cgroup driver to use...
	I0610 09:40:28.263638    3624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:40:28.275552    3624 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 09:40:28.275884    3624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 09:40:28.282964    3624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 09:40:28.289901    3624 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 09:40:28.289942    3624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 09:40:28.296228    3624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:40:28.303306    3624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 09:40:28.310345    3624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:40:28.317563    3624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 09:40:28.325422    3624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 09:40:28.332476    3624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 09:40:28.338530    3624 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 09:40:28.338711    3624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 09:40:28.345059    3624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:40:28.431059    3624 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 09:40:28.442895    3624 start.go:481] detecting cgroup driver to use...
	I0610 09:40:28.442970    3624 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 09:40:28.452142    3624 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 09:40:28.452670    3624 command_runner.go:130] > [Unit]
	I0610 09:40:28.452679    3624 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 09:40:28.452683    3624 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 09:40:28.452690    3624 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 09:40:28.452695    3624 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 09:40:28.452699    3624 command_runner.go:130] > StartLimitBurst=3
	I0610 09:40:28.452703    3624 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 09:40:28.452706    3624 command_runner.go:130] > [Service]
	I0610 09:40:28.452710    3624 command_runner.go:130] > Type=notify
	I0610 09:40:28.452713    3624 command_runner.go:130] > Restart=on-failure
	I0610 09:40:28.452723    3624 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 09:40:28.452729    3624 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 09:40:28.452735    3624 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 09:40:28.452740    3624 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 09:40:28.452746    3624 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 09:40:28.452751    3624 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 09:40:28.452758    3624 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 09:40:28.452766    3624 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 09:40:28.452771    3624 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 09:40:28.452774    3624 command_runner.go:130] > ExecStart=
	I0610 09:40:28.452785    3624 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0610 09:40:28.452795    3624 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 09:40:28.452802    3624 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 09:40:28.452807    3624 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 09:40:28.452811    3624 command_runner.go:130] > LimitNOFILE=infinity
	I0610 09:40:28.452815    3624 command_runner.go:130] > LimitNPROC=infinity
	I0610 09:40:28.452819    3624 command_runner.go:130] > LimitCORE=infinity
	I0610 09:40:28.452824    3624 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 09:40:28.452828    3624 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 09:40:28.452831    3624 command_runner.go:130] > TasksMax=infinity
	I0610 09:40:28.452835    3624 command_runner.go:130] > TimeoutStartSec=0
	I0610 09:40:28.452840    3624 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 09:40:28.452843    3624 command_runner.go:130] > Delegate=yes
	I0610 09:40:28.452847    3624 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 09:40:28.452850    3624 command_runner.go:130] > KillMode=process
	I0610 09:40:28.452854    3624 command_runner.go:130] > [Install]
	I0610 09:40:28.452862    3624 command_runner.go:130] > WantedBy=multi-user.target
	I0610 09:40:28.452968    3624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:40:28.463161    3624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 09:40:28.475844    3624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:40:28.484993    3624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:40:28.493653    3624 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 09:40:28.527920    3624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:40:28.537133    3624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:40:28.548847    3624 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 09:40:28.549123    3624 ssh_runner.go:195] Run: which cri-dockerd
	I0610 09:40:28.551222    3624 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 09:40:28.551484    3624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 09:40:28.557699    3624 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 09:40:28.569580    3624 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 09:40:28.650580    3624 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 09:40:28.741638    3624 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 09:40:28.741655    3624 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 09:40:28.753611    3624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:40:28.835261    3624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:40:30.164925    3624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.32964924s)
	I0610 09:40:30.164986    3624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:40:30.246288    3624 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 09:40:30.335523    3624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:40:30.428527    3624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:40:30.520265    3624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 09:40:30.536499    3624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:40:30.630773    3624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0610 09:40:30.685649    3624 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 09:40:30.685743    3624 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 09:40:30.689488    3624 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 09:40:30.689500    3624 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 09:40:30.689505    3624 command_runner.go:130] > Device: 16h/22d	Inode: 841         Links: 1
	I0610 09:40:30.689511    3624 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 09:40:30.689519    3624 command_runner.go:130] > Access: 2023-06-10 16:40:30.501203266 +0000
	I0610 09:40:30.689539    3624 command_runner.go:130] > Modify: 2023-06-10 16:40:30.501203266 +0000
	I0610 09:40:30.689545    3624 command_runner.go:130] > Change: 2023-06-10 16:40:30.504271014 +0000
	I0610 09:40:30.689549    3624 command_runner.go:130] >  Birth: -
	I0610 09:40:30.689686    3624 start.go:549] Will wait 60s for crictl version
	I0610 09:40:30.689731    3624 ssh_runner.go:195] Run: which crictl
	I0610 09:40:30.691990    3624 command_runner.go:130] > /usr/bin/crictl
	I0610 09:40:30.692225    3624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 09:40:30.722392    3624 command_runner.go:130] > Version:  0.1.0
	I0610 09:40:30.722416    3624 command_runner.go:130] > RuntimeName:  docker
	I0610 09:40:30.722421    3624 command_runner.go:130] > RuntimeVersion:  24.0.2
	I0610 09:40:30.722425    3624 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0610 09:40:30.722440    3624 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0610 09:40:30.722518    3624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:40:30.738678    3624 command_runner.go:130] > 24.0.2
	I0610 09:40:30.739299    3624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:40:30.755906    3624 command_runner.go:130] > 24.0.2
	I0610 09:40:30.778991    3624 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0610 09:40:30.779062    3624 main.go:141] libmachine: (multinode-826000) Calling .GetIP
	I0610 09:40:30.779564    3624 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0610 09:40:30.783577    3624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:40:30.791381    3624 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:40:30.791465    3624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:40:30.805217    3624 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
	I0610 09:40:30.805230    3624 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
	I0610 09:40:30.805234    3624 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
	I0610 09:40:30.805242    3624 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
	I0610 09:40:30.805246    3624 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0610 09:40:30.805250    3624 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0610 09:40:30.805253    3624 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 09:40:30.805259    3624 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:40:30.805857    3624 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:40:30.805866    3624 docker.go:563] Images already preloaded, skipping extraction
	I0610 09:40:30.805934    3624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:40:30.818821    3624 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
	I0610 09:40:30.818831    3624 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
	I0610 09:40:30.818836    3624 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
	I0610 09:40:30.818840    3624 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
	I0610 09:40:30.818843    3624 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0610 09:40:30.818847    3624 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0610 09:40:30.818852    3624 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 09:40:30.818858    3624 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:40:30.818938    3624 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:40:30.818955    3624 cache_images.go:84] Images are preloaded, skipping loading
	I0610 09:40:30.819022    3624 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 09:40:30.835729    3624 command_runner.go:130] > cgroupfs
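The "docker info --format {{.CgroupDriver}}" probe above returns cgroupfs, and the generated KubeletConfiguration further down must match it (cgroupDriver: cgroupfs). A sketch of the same probe from Go, assuming a docker CLI on PATH:

// cgroupdriver.go: run the cgroup-driver probe shown in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func dockerCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := dockerCgroupDriver()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	// The log shows "cgroupfs"; the kubelet config must be set to match.
	fmt.Println("docker cgroup driver:", driver)
}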
	I0610 09:40:30.836254    3624 cni.go:84] Creating CNI manager for ""
	I0610 09:40:30.836264    3624 cni.go:136] 1 nodes found, recommending kindnet
	I0610 09:40:30.836280    3624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 09:40:30.836295    3624 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.12 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-826000 NodeName:multinode-826000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 09:40:30.836381    3624 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-826000"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 09:40:30.836454    3624 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-826000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
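The kubeadm config printed above is four YAML documents joined by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A standard-library-only sketch that splits such a stream and lists each document's kind; a real tool would use a proper YAML parser:

// kubeadmdocs.go: split a multi-document kubeadm config (like the one
// printed above) and report each document's kind.
package main

import (
	"fmt"
	"strings"
)

func kinds(config string) []string {
	var out []string
	for _, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			l := strings.TrimSpace(line)
			if strings.HasPrefix(l, "kind:") {
				out = append(out, strings.TrimSpace(strings.TrimPrefix(l, "kind:")))
			}
		}
	}
	return out
}

func main() {
	sample := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	fmt.Println(kinds(sample)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}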
	I0610 09:40:30.836517    3624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 09:40:30.843311    3624 command_runner.go:130] > kubeadm
	I0610 09:40:30.843320    3624 command_runner.go:130] > kubectl
	I0610 09:40:30.843324    3624 command_runner.go:130] > kubelet
	I0610 09:40:30.843338    3624 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 09:40:30.843392    3624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 09:40:30.849651    3624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0610 09:40:30.860484    3624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 09:40:30.872128    3624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0610 09:40:30.883398    3624 ssh_runner.go:195] Run: grep 192.168.64.12	control-plane.minikube.internal$ /etc/hosts
	I0610 09:40:30.885742    3624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:40:30.893169    3624 certs.go:56] Setting up /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000 for IP: 192.168.64.12
	I0610 09:40:30.893184    3624 certs.go:190] acquiring lock for shared ca certs: {Name:mk1e521581ce58a8d2ad5f887c3da11f6a7a0530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:40:30.893357    3624 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.key
	I0610 09:40:30.893414    3624 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16578-1235/.minikube/proxy-client-ca.key
	I0610 09:40:30.893461    3624 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.key
	I0610 09:40:30.893475    3624 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.crt with IP's: []
	I0610 09:40:31.004710    3624 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.crt ...
	I0610 09:40:31.004725    3624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.crt: {Name:mkd7feceb498c30dbdde4e0f18fe114351ced1da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:40:31.005033    3624 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.key ...
	I0610 09:40:31.005040    3624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.key: {Name:mka5990817349259cde5f0ccffab5b4141665db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:40:31.005226    3624 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.key.546ed142
	I0610 09:40:31.005238    3624 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.crt.546ed142 with IP's: [192.168.64.12 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 09:40:31.168146    3624 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.crt.546ed142 ...
	I0610 09:40:31.168161    3624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.crt.546ed142: {Name:mk9366f63d86ab5997da39b0265b19620376e0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:40:31.168443    3624 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.key.546ed142 ...
	I0610 09:40:31.168452    3624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.key.546ed142: {Name:mka3c13093aa10637256880ca3dac51d3d72d2ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:40:31.168638    3624 certs.go:337] copying /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.crt.546ed142 -> /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.crt
	I0610 09:40:31.168790    3624 certs.go:341] copying /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.key.546ed142 -> /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.key
	I0610 09:40:31.168942    3624 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.key
	I0610 09:40:31.168959    3624 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.crt with IP's: []
	I0610 09:40:31.270562    3624 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.crt ...
	I0610 09:40:31.270578    3624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.crt: {Name:mke3e5ba6b969c13ab53b0f5e637d49a383ea126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:40:31.270848    3624 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.key ...
	I0610 09:40:31.270856    3624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.key: {Name:mkdacfb24ba770d6e7bde72a915a6de6cbd77503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:40:31.271044    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 09:40:31.271074    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 09:40:31.271098    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 09:40:31.271118    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 09:40:31.271141    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 09:40:31.271160    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 09:40:31.271177    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 09:40:31.271195    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 09:40:31.271287    3624 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/1682.pem (1338 bytes)
	W0610 09:40:31.271332    3624 certs.go:433] ignoring /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/1682_empty.pem, impossibly tiny 0 bytes
	I0610 09:40:31.271343    3624 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca-key.pem (1675 bytes)
	I0610 09:40:31.271377    3624 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem (1078 bytes)
	I0610 09:40:31.271406    3624 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem (1123 bytes)
	I0610 09:40:31.271433    3624 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem (1679 bytes)
	I0610 09:40:31.271499    3624 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem (1708 bytes)
	I0610 09:40:31.271528    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:40:31.271547    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/1682.pem -> /usr/share/ca-certificates/1682.pem
	I0610 09:40:31.271565    3624 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem -> /usr/share/ca-certificates/16822.pem
	I0610 09:40:31.271989    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 09:40:31.288711    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 09:40:31.304786    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 09:40:31.321683    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 09:40:31.337828    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 09:40:31.353717    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 09:40:31.370755    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 09:40:31.387123    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 09:40:31.403445    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 09:40:31.419625    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/1682.pem --> /usr/share/ca-certificates/1682.pem (1338 bytes)
	I0610 09:40:31.436524    3624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem --> /usr/share/ca-certificates/16822.pem (1708 bytes)
	I0610 09:40:31.452545    3624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 09:40:31.463640    3624 ssh_runner.go:195] Run: openssl version
	I0610 09:40:31.467015    3624 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0610 09:40:31.467144    3624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 09:40:31.473592    3624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:40:31.476481    3624 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:40:31.476682    3624 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:40:31.476743    3624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:40:31.480172    3624 command_runner.go:130] > b5213941
	I0610 09:40:31.480402    3624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 09:40:31.486839    3624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1682.pem && ln -fs /usr/share/ca-certificates/1682.pem /etc/ssl/certs/1682.pem"
	I0610 09:40:31.494036    3624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1682.pem
	I0610 09:40:31.497092    3624 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 16:27 /usr/share/ca-certificates/1682.pem
	I0610 09:40:31.497150    3624 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 10 16:27 /usr/share/ca-certificates/1682.pem
	I0610 09:40:31.497190    3624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1682.pem
	I0610 09:40:31.500643    3624 command_runner.go:130] > 51391683
	I0610 09:40:31.500796    3624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1682.pem /etc/ssl/certs/51391683.0"
	I0610 09:40:31.507218    3624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16822.pem && ln -fs /usr/share/ca-certificates/16822.pem /etc/ssl/certs/16822.pem"
	I0610 09:40:31.513754    3624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16822.pem
	I0610 09:40:31.516652    3624 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 16:27 /usr/share/ca-certificates/16822.pem
	I0610 09:40:31.516821    3624 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 10 16:27 /usr/share/ca-certificates/16822.pem
	I0610 09:40:31.516856    3624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16822.pem
	I0610 09:40:31.520274    3624 command_runner.go:130] > 3ec20f2e
	I0610 09:40:31.520381    3624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16822.pem /etc/ssl/certs/3ec20f2e.0"
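	
	Note: the three openssl/ln sequences above implement OpenSSL's hashed-directory CA lookup: openssl x509 -hash prints the subject-name hash (b5213941, 51391683, 3ec20f2e in this run), and a symlink named <hash>.0 in /etc/ssl/certs makes the certificate discoverable to TLS clients scanning that directory. A condensed sketch of the same steps run locally; it assumes an openssl binary and root privileges, and simplifies by linking the given file directly where the log links the /etc/ssl/certs copy.
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	// trustCert hashes a CA certificate and installs the <hash>.0 symlink
	// that OpenSSL's default verify path scans.
	func trustCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // emulate ln -fs: replace any stale link
		return os.Symlink(pemPath, link)
	}
	
	func main() {
		if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	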
	I0610 09:40:31.526950    3624 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 09:40:31.529491    3624 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 09:40:31.529573    3624 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 09:40:31.529613    3624 kubeadm.go:404] StartCluster: {Name:multinode-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:40:31.529695    3624 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 09:40:31.542158    3624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 09:40:31.548303    3624 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0610 09:40:31.548314    3624 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0610 09:40:31.548320    3624 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0610 09:40:31.548376    3624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 09:40:31.554480    3624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 09:40:31.561664    3624 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0610 09:40:31.561674    3624 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0610 09:40:31.561680    3624 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0610 09:40:31.561686    3624 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 09:40:31.561774    3624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 09:40:31.561807    3624 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 09:40:31.620954    3624 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0610 09:40:31.620970    3624 command_runner.go:130] > [init] Using Kubernetes version: v1.27.2
	I0610 09:40:31.621045    3624 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 09:40:31.621052    3624 command_runner.go:130] > [preflight] Running pre-flight checks
	I0610 09:40:31.757960    3624 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 09:40:31.757973    3624 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 09:40:31.758049    3624 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 09:40:31.758055    3624 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 09:40:31.758125    3624 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 09:40:31.758133    3624 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 09:40:31.891663    3624 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 09:40:31.891678    3624 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 09:40:31.966244    3624 out.go:204]   - Generating certificates and keys ...
	I0610 09:40:31.966327    3624 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0610 09:40:31.966337    3624 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 09:40:31.966386    3624 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0610 09:40:31.966395    3624 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 09:40:32.001311    3624 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 09:40:32.001318    3624 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 09:40:32.143336    3624 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 09:40:32.143349    3624 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0610 09:40:32.564507    3624 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 09:40:32.564521    3624 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0610 09:40:32.719788    3624 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 09:40:32.719804    3624 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0610 09:40:32.841785    3624 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 09:40:32.841803    3624 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0610 09:40:32.841935    3624 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-826000] and IPs [192.168.64.12 127.0.0.1 ::1]
	I0610 09:40:32.841943    3624 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-826000] and IPs [192.168.64.12 127.0.0.1 ::1]
	I0610 09:40:33.004449    3624 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 09:40:33.004460    3624 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0610 09:40:33.004632    3624 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-826000] and IPs [192.168.64.12 127.0.0.1 ::1]
	I0610 09:40:33.004641    3624 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-826000] and IPs [192.168.64.12 127.0.0.1 ::1]
	I0610 09:40:33.247374    3624 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 09:40:33.247382    3624 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 09:40:33.727266    3624 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 09:40:33.727275    3624 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 09:40:34.104889    3624 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 09:40:34.104892    3624 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0610 09:40:34.104950    3624 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 09:40:34.104957    3624 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 09:40:34.234984    3624 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 09:40:34.234999    3624 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 09:40:34.404321    3624 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 09:40:34.404339    3624 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 09:40:34.647144    3624 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 09:40:34.647152    3624 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 09:40:34.946041    3624 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 09:40:34.946048    3624 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 09:40:34.956790    3624 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 09:40:34.956789    3624 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 09:40:34.957527    3624 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 09:40:34.957529    3624 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 09:40:34.957573    3624 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 09:40:34.957580    3624 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 09:40:35.049828    3624 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 09:40:35.049832    3624 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 09:40:35.077275    3624 out.go:204]   - Booting up control plane ...
	I0610 09:40:35.077372    3624 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 09:40:35.077388    3624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 09:40:35.077473    3624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 09:40:35.077479    3624 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 09:40:35.077539    3624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 09:40:35.077551    3624 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 09:40:35.077629    3624 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 09:40:35.077639    3624 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 09:40:35.077763    3624 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 09:40:35.077771    3624 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 09:40:41.509596    3624 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.508023 seconds
	I0610 09:40:41.509613    3624 command_runner.go:130] > [apiclient] All control plane components are healthy after 6.508023 seconds
	I0610 09:40:41.509780    3624 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 09:40:41.509789    3624 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 09:40:41.520803    3624 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 09:40:41.520810    3624 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 09:40:42.034681    3624 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 09:40:42.034697    3624 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0610 09:40:42.034867    3624 kubeadm.go:322] [mark-control-plane] Marking the node multinode-826000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 09:40:42.034875    3624 command_runner.go:130] > [mark-control-plane] Marking the node multinode-826000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 09:40:42.545173    3624 kubeadm.go:322] [bootstrap-token] Using token: heneoq.urfi27s5e406gglc
	I0610 09:40:42.545187    3624 command_runner.go:130] > [bootstrap-token] Using token: heneoq.urfi27s5e406gglc
	I0610 09:40:42.570520    3624 out.go:204]   - Configuring RBAC rules ...
	I0610 09:40:42.570617    3624 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 09:40:42.570630    3624 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 09:40:42.574146    3624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 09:40:42.574158    3624 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 09:40:42.579732    3624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 09:40:42.579743    3624 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 09:40:42.582039    3624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 09:40:42.582052    3624 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 09:40:42.585232    3624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 09:40:42.585244    3624 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 09:40:42.593846    3624 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 09:40:42.593866    3624 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 09:40:42.600426    3624 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 09:40:42.600438    3624 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 09:40:42.787056    3624 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 09:40:42.787077    3624 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0610 09:40:42.977280    3624 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 09:40:42.977292    3624 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0610 09:40:42.978151    3624 kubeadm.go:322] 
	I0610 09:40:42.978204    3624 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 09:40:42.978210    3624 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0610 09:40:42.978215    3624 kubeadm.go:322] 
	I0610 09:40:42.978287    3624 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 09:40:42.978298    3624 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0610 09:40:42.978307    3624 kubeadm.go:322] 
	I0610 09:40:42.978327    3624 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0610 09:40:42.978331    3624 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 09:40:42.978399    3624 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 09:40:42.978403    3624 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 09:40:42.978445    3624 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 09:40:42.978455    3624 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 09:40:42.978464    3624 kubeadm.go:322] 
	I0610 09:40:42.978502    3624 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0610 09:40:42.978510    3624 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0610 09:40:42.978527    3624 kubeadm.go:322] 
	I0610 09:40:42.978571    3624 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 09:40:42.978575    3624 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 09:40:42.978581    3624 kubeadm.go:322] 
	I0610 09:40:42.978622    3624 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 09:40:42.978630    3624 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0610 09:40:42.978689    3624 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 09:40:42.978696    3624 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 09:40:42.978771    3624 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 09:40:42.978778    3624 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 09:40:42.978781    3624 kubeadm.go:322] 
	I0610 09:40:42.978868    3624 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 09:40:42.978879    3624 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0610 09:40:42.978949    3624 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 09:40:42.978955    3624 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0610 09:40:42.978958    3624 kubeadm.go:322] 
	I0610 09:40:42.979028    3624 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token heneoq.urfi27s5e406gglc \
	I0610 09:40:42.979035    3624 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token heneoq.urfi27s5e406gglc \
	I0610 09:40:42.979112    3624 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:25bbecbd97dc6f81e6fad59f59c7cfd513bc3a28642154b16be7e48c15e587d7 \
	I0610 09:40:42.979116    3624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bbecbd97dc6f81e6fad59f59c7cfd513bc3a28642154b16be7e48c15e587d7 \
	I0610 09:40:42.979130    3624 command_runner.go:130] > 	--control-plane 
	I0610 09:40:42.979134    3624 kubeadm.go:322] 	--control-plane 
	I0610 09:40:42.979139    3624 kubeadm.go:322] 
	I0610 09:40:42.979209    3624 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0610 09:40:42.979214    3624 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 09:40:42.979217    3624 kubeadm.go:322] 
	I0610 09:40:42.979288    3624 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token heneoq.urfi27s5e406gglc \
	I0610 09:40:42.979291    3624 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token heneoq.urfi27s5e406gglc \
	I0610 09:40:42.979372    3624 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:25bbecbd97dc6f81e6fad59f59c7cfd513bc3a28642154b16be7e48c15e587d7 
	I0610 09:40:42.979377    3624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bbecbd97dc6f81e6fad59f59c7cfd513bc3a28642154b16be7e48c15e587d7 
	I0610 09:40:42.979903    3624 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 09:40:42.979911    3624 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 09:40:42.980043    3624 kubeadm.go:322] W0610 16:40:31.643416    1246 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:40:42.980049    3624 command_runner.go:130] ! W0610 16:40:31.643416    1246 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:40:42.980168    3624 kubeadm.go:322] W0610 16:40:34.995919    1246 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:40:42.980186    3624 command_runner.go:130] ! W0610 16:40:34.995919    1246 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
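	
	Note: the join commands kubeadm prints above carry a bootstrap token (heneoq.urfi27s5e406gglc, valid for 24h by default) plus the CA certificate hash that lets joining nodes authenticate the control plane. If the token has expired by the time another node is added, an equivalent command can be regenerated on the control-plane node; a small sketch, assuming kubeadm on PATH and admin credentials in place.
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		// Creates a fresh bootstrap token and prints a complete join command,
		// the same shape as the "kubeadm join ... --discovery-token-ca-cert-hash"
		// lines in the output above.
		out, err := exec.Command("kubeadm", "token", "create", "--print-join-command").CombinedOutput()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Print(string(out))
	}
	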
	I0610 09:40:42.980208    3624 cni.go:84] Creating CNI manager for ""
	I0610 09:40:42.980215    3624 cni.go:136] 1 nodes found, recommending kindnet
	I0610 09:40:43.020233    3624 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 09:40:43.062514    3624 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 09:40:43.067229    3624 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0610 09:40:43.067241    3624 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0610 09:40:43.067246    3624 command_runner.go:130] > Device: 11h/17d	Inode: 3541        Links: 1
	I0610 09:40:43.067259    3624 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 09:40:43.067264    3624 command_runner.go:130] > Access: 2023-06-10 16:40:24.169612324 +0000
	I0610 09:40:43.067272    3624 command_runner.go:130] > Modify: 2023-06-07 05:33:21.000000000 +0000
	I0610 09:40:43.067276    3624 command_runner.go:130] > Change: 2023-06-10 16:40:22.893612327 +0000
	I0610 09:40:43.067279    3624 command_runner.go:130] >  Birth: -
	I0610 09:40:43.067332    3624 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0610 09:40:43.067342    3624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 09:40:43.096352    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 09:40:43.881459    3624 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0610 09:40:43.885973    3624 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0610 09:40:43.892109    3624 command_runner.go:130] > serviceaccount/kindnet created
	I0610 09:40:43.899641    3624 command_runner.go:130] > daemonset.apps/kindnet created
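	
	Note: two things happen above: the stat on /opt/cni/bin/portmap confirms the CNI plugin binaries are present in the guest, and the generated kindnet manifest is applied with the version-matched kubectl that minikube stages under /var/lib/minikube/binaries. The same two steps condensed into a sketch; paths are taken from this log, error handling is minimal.
	
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"os/exec"
	)
	
	func main() {
		// Preflight: the portmap plugin must exist or pod networking will fail later.
		if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
			log.Fatalf("CNI plugins missing: %v", err)
		}
		// Apply the CNI manifest with the kubectl matching the cluster version.
		out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.27.2/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
		if err != nil {
			log.Fatalf("%v: %s", err, out)
		}
		fmt.Print(string(out)) // clusterrole/clusterrolebinding/serviceaccount/daemonset created
	}
	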
	I0610 09:40:43.901482    3624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 09:40:43.901555    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5 minikube.k8s.io/name=multinode-826000 minikube.k8s.io/updated_at=2023_06_10T09_40_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:43.901567    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:44.100510    3624 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0610 09:40:44.102443    3624 command_runner.go:130] > -16
	I0610 09:40:44.102462    3624 ops.go:34] apiserver oom_adj: -16
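	
	Note: the -16 read back above is the apiserver's legacy OOM adjustment; negative values tell the kernel's OOM killer to spare the process under memory pressure, so the control plane outlives less critical workloads. A minimal local equivalent of the cat /proc/$(pgrep kube-apiserver)/oom_adj check, assuming exactly one matching process.
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		pid, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
			os.Exit(1)
		}
		b, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(strings.TrimSpace(string(b))) // "-16" in the run above
	}
	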
	I0610 09:40:44.102507    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:44.102510    3624 command_runner.go:130] > node/multinode-826000 labeled
	I0610 09:40:44.176693    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:44.677491    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:44.745913    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:45.177210    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:45.240444    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:45.677093    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:45.755001    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:46.178629    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:46.247359    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:46.679122    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:46.744137    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:47.177976    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:47.255652    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:47.678055    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:47.751260    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:48.178121    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:48.243907    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:48.677476    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:48.749988    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:49.177187    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:49.248730    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:49.677098    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:49.740658    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:50.177646    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:50.245936    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:50.677885    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:50.744665    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:51.177752    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:51.246741    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:51.677658    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:51.742378    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:52.177402    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:52.237926    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:52.677515    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:52.741563    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:53.178652    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:53.243120    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:53.677694    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:53.739522    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:54.177755    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:54.245574    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:54.679065    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:54.754056    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:55.177639    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:55.236516    3624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 09:40:55.676954    3624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:40:55.793150    3624 command_runner.go:130] > NAME      SECRETS   AGE
	I0610 09:40:55.793186    3624 command_runner.go:130] > default   0         0s
	I0610 09:40:55.793249    3624 kubeadm.go:1076] duration metric: took 11.891793841s to wait for elevateKubeSystemPrivileges.
	I0610 09:40:55.793268    3624 kubeadm.go:406] StartCluster complete in 24.263743579s
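	
	Note: the run of serviceaccounts "default" not found errors above is expected, not a failure: minikube polls roughly every 500ms until kube-controller-manager has created the default ServiceAccount (about 11.9s in this run), since granting kube-system privileges before that account exists would fail. A sketch of the same wait using client-go; clientset construction is omitted and the k8s.io/client-go and k8s.io/apimachinery imports are assumed to be vendored.
	
	package wait
	
	import (
		"context"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// WaitDefaultSA blocks until the "default" ServiceAccount in the default
	// namespace exists, polling every 500ms like the log above, or until ctx ends.
	func WaitDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}
	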
	I0610 09:40:55.793280    3624 settings.go:142] acquiring lock: {Name:mkb9b6482d5ac8949a51ff4918d4bb9ad74e8d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:40:55.793370    3624 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:40:55.793821    3624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/kubeconfig: {Name:mk52bc17fccce955e53da0cb42ca8ae2dd34c214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:40:55.794062    3624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 09:40:55.794075    3624 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0610 09:40:55.794132    3624 addons.go:66] Setting storage-provisioner=true in profile "multinode-826000"
	I0610 09:40:55.794138    3624 addons.go:66] Setting default-storageclass=true in profile "multinode-826000"
	I0610 09:40:55.794144    3624 addons.go:228] Setting addon storage-provisioner=true in "multinode-826000"
	I0610 09:40:55.794154    3624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-826000"
	I0610 09:40:55.794175    3624 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:40:55.794177    3624 host.go:66] Checking if "multinode-826000" exists ...
	I0610 09:40:55.794249    3624 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:40:55.794418    3624 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:40:55.794419    3624 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:40:55.794434    3624 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:40:55.794435    3624 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:40:55.794466    3624 kapi.go:59] client config for multinode-826000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.key", CAFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x257f980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 09:40:55.797510    3624 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 09:40:55.797833    3624 round_trippers.go:463] GET https://192.168.64.12:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 09:40:55.797841    3624 round_trippers.go:469] Request Headers:
	I0610 09:40:55.797849    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:40:55.797854    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:40:55.802331    3624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51144
	I0610 09:40:55.802570    3624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51146
	I0610 09:40:55.802721    3624 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:40:55.802849    3624 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:40:55.803084    3624 main.go:141] libmachine: Using API Version  1
	I0610 09:40:55.803101    3624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:40:55.803177    3624 main.go:141] libmachine: Using API Version  1
	I0610 09:40:55.803189    3624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:40:55.803340    3624 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:40:55.803382    3624 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:40:55.803484    3624 main.go:141] libmachine: (multinode-826000) Calling .GetState
	I0610 09:40:55.803571    3624 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:40:55.803647    3624 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3636
	I0610 09:40:55.803688    3624 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:40:55.803704    3624 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:40:55.805517    3624 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:40:55.805735    3624 kapi.go:59] client config for multinode-826000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.key", CAFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x257f980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 09:40:55.806004    3624 round_trippers.go:463] GET https://192.168.64.12:8443/apis/storage.k8s.io/v1/storageclasses
	I0610 09:40:55.806015    3624 round_trippers.go:469] Request Headers:
	I0610 09:40:55.806033    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:40:55.806039    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:40:55.806269    3624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 09:40:55.806281    3624 round_trippers.go:577] Response Headers:
	I0610 09:40:55.806287    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:40:55 GMT
	I0610 09:40:55.806292    3624 round_trippers.go:580]     Audit-Id: 9cc528f3-fa40-4d68-8240-f5f00eee5d27
	I0610 09:40:55.806297    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:40:55.806302    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:40:55.806307    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:40:55.806312    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:40:55.806317    3624 round_trippers.go:580]     Content-Length: 291
	I0610 09:40:55.806345    3624 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"625a723b-e519-4e66-a2da-66daece80ce5","resourceVersion":"343","creationTimestamp":"2023-06-10T16:40:42Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 09:40:55.806666    3624 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"625a723b-e519-4e66-a2da-66daece80ce5","resourceVersion":"343","creationTimestamp":"2023-06-10T16:40:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 09:40:55.806700    3624 round_trippers.go:463] PUT https://192.168.64.12:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 09:40:55.806708    3624 round_trippers.go:469] Request Headers:
	I0610 09:40:55.806714    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:40:55.806720    3624 round_trippers.go:473]     Content-Type: application/json
	I0610 09:40:55.806725    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:40:55.808715    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:40:55.808725    3624 round_trippers.go:577] Response Headers:
	I0610 09:40:55.808730    3624 round_trippers.go:580]     Audit-Id: 3e21ce24-9c2f-4a57-ab96-2150859b19bd
	I0610 09:40:55.808735    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:40:55.808740    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:40:55.808745    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:40:55.808751    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:40:55.808759    3624 round_trippers.go:580]     Content-Length: 109
	I0610 09:40:55.808765    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:40:55 GMT
	I0610 09:40:55.808780    3624 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"344"},"items":[]}
	I0610 09:40:55.808949    3624 addons.go:228] Setting addon default-storageclass=true in "multinode-826000"
	I0610 09:40:55.808976    3624 host.go:66] Checking if "multinode-826000" exists ...
	I0610 09:40:55.809250    3624 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:40:55.809270    3624 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:40:55.811384    3624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51148
	I0610 09:40:55.811463    3624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 09:40:55.812023    3624 round_trippers.go:577] Response Headers:
	I0610 09:40:55.812044    3624 round_trippers.go:580]     Audit-Id: 53aa6915-2bff-477b-9dc8-f1aa12f60cf1
	I0610 09:40:55.812054    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:40:55.812063    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:40:55.812071    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:40:55.812080    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:40:55.812095    3624 round_trippers.go:580]     Content-Length: 291
	I0610 09:40:55.812105    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:40:55 GMT
	I0610 09:40:55.812165    3624 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"625a723b-e519-4e66-a2da-66daece80ce5","resourceVersion":"345","creationTimestamp":"2023-06-10T16:40:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 09:40:55.812271    3624 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:40:55.813111    3624 main.go:141] libmachine: Using API Version  1
	I0610 09:40:55.813128    3624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:40:55.813442    3624 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:40:55.813552    3624 main.go:141] libmachine: (multinode-826000) Calling .GetState
	I0610 09:40:55.813638    3624 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:40:55.813713    3624 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3636
	I0610 09:40:55.814639    3624 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:40:55.816923    3624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51150
	I0610 09:40:55.853028    3624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:40:55.853494    3624 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:40:55.873166    3624 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:40:55.873179    3624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 09:40:55.873194    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:55.873335    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:55.873437    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:55.873534    3624 main.go:141] libmachine: Using API Version  1
	I0610 09:40:55.873549    3624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:40:55.873555    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:55.873674    3624 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:40:55.873773    3624 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:40:55.874139    3624 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:40:55.874153    3624 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:40:55.881426    3624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51153
	I0610 09:40:55.881791    3624 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:40:55.882138    3624 main.go:141] libmachine: Using API Version  1
	I0610 09:40:55.882148    3624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:40:55.882417    3624 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:40:55.882536    3624 main.go:141] libmachine: (multinode-826000) Calling .GetState
	I0610 09:40:55.882625    3624 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:40:55.882712    3624 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3636
	I0610 09:40:55.883640    3624 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:40:55.883829    3624 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 09:40:55.883837    3624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 09:40:55.883845    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:40:55.883921    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:40:55.884032    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:40:55.884108    3624 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:40:55.884177    3624 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:40:55.929938    3624 command_runner.go:130] > apiVersion: v1
	I0610 09:40:55.929948    3624 command_runner.go:130] > data:
	I0610 09:40:55.929952    3624 command_runner.go:130] >   Corefile: |
	I0610 09:40:55.929955    3624 command_runner.go:130] >     .:53 {
	I0610 09:40:55.929959    3624 command_runner.go:130] >         errors
	I0610 09:40:55.929964    3624 command_runner.go:130] >         health {
	I0610 09:40:55.929968    3624 command_runner.go:130] >            lameduck 5s
	I0610 09:40:55.929971    3624 command_runner.go:130] >         }
	I0610 09:40:55.929974    3624 command_runner.go:130] >         ready
	I0610 09:40:55.929979    3624 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0610 09:40:55.929990    3624 command_runner.go:130] >            pods insecure
	I0610 09:40:55.929995    3624 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0610 09:40:55.930001    3624 command_runner.go:130] >            ttl 30
	I0610 09:40:55.930012    3624 command_runner.go:130] >         }
	I0610 09:40:55.930019    3624 command_runner.go:130] >         prometheus :9153
	I0610 09:40:55.930024    3624 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0610 09:40:55.930027    3624 command_runner.go:130] >            max_concurrent 1000
	I0610 09:40:55.930042    3624 command_runner.go:130] >         }
	I0610 09:40:55.930049    3624 command_runner.go:130] >         cache 30
	I0610 09:40:55.930053    3624 command_runner.go:130] >         loop
	I0610 09:40:55.930056    3624 command_runner.go:130] >         reload
	I0610 09:40:55.930060    3624 command_runner.go:130] >         loadbalance
	I0610 09:40:55.930063    3624 command_runner.go:130] >     }
	I0610 09:40:55.930067    3624 command_runner.go:130] > kind: ConfigMap
	I0610 09:40:55.930070    3624 command_runner.go:130] > metadata:
	I0610 09:40:55.930080    3624 command_runner.go:130] >   creationTimestamp: "2023-06-10T16:40:42Z"
	I0610 09:40:55.930088    3624 command_runner.go:130] >   name: coredns
	I0610 09:40:55.930092    3624 command_runner.go:130] >   namespace: kube-system
	I0610 09:40:55.930096    3624 command_runner.go:130] >   resourceVersion: "232"
	I0610 09:40:55.930100    3624 command_runner.go:130] >   uid: 7a9d9c8f-b950-4b01-817d-0d3621e11e25
	I0610 09:40:55.930901    3624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.64.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 09:40:56.023857    3624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 09:40:56.040808    3624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:40:56.312467    3624 round_trippers.go:463] GET https://192.168.64.12:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 09:40:56.312478    3624 round_trippers.go:469] Request Headers:
	I0610 09:40:56.312485    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:40:56.312493    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:40:56.314337    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:40:56.314345    3624 round_trippers.go:577] Response Headers:
	I0610 09:40:56.314351    3624 round_trippers.go:580]     Audit-Id: e8af9108-9e48-463e-96c6-61f268299fc7
	I0610 09:40:56.314356    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:40:56.314361    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:40:56.314366    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:40:56.314371    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:40:56.314376    3624 round_trippers.go:580]     Content-Length: 291
	I0610 09:40:56.314381    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:40:56 GMT
	I0610 09:40:56.314395    3624 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"625a723b-e519-4e66-a2da-66daece80ce5","resourceVersion":"355","creationTimestamp":"2023-06-10T16:40:42Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0610 09:40:56.314452    3624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-826000" context rescaled to 1 replicas
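	Note: the GET above reads the coredns deployment's autoscaling/v1 Scale subresource before minikube pins the deployment at a single replica. The equivalent manual operation (a hypothetical invocation, not part of this run) would be:
	
	        kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system scale deployment coredns --replicas=1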
	I0610 09:40:56.314471    3624 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:40:56.335308    3624 out.go:177] * Verifying Kubernetes components...
	I0610 09:40:56.377064    3624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:40:56.454631    3624 command_runner.go:130] > configmap/coredns replaced
	I0610 09:40:56.463782    3624 start.go:916] {"host.minikube.internal": 192.168.64.1} host record injected into CoreDNS's ConfigMap
	I0610 09:40:56.480815    3624 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0610 09:40:56.483144    3624 main.go:141] libmachine: Making call to close driver server
	I0610 09:40:56.483156    3624 main.go:141] libmachine: (multinode-826000) Calling .Close
	I0610 09:40:56.483317    3624 main.go:141] libmachine: (multinode-826000) DBG | Closing plugin on server side
	I0610 09:40:56.483344    3624 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:40:56.483353    3624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:40:56.483363    3624 main.go:141] libmachine: Making call to close driver server
	I0610 09:40:56.483372    3624 main.go:141] libmachine: (multinode-826000) Calling .Close
	I0610 09:40:56.483473    3624 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:40:56.483484    3624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:40:56.483495    3624 main.go:141] libmachine: (multinode-826000) DBG | Closing plugin on server side
	I0610 09:40:56.483498    3624 main.go:141] libmachine: Making call to close driver server
	I0610 09:40:56.483508    3624 main.go:141] libmachine: (multinode-826000) Calling .Close
	I0610 09:40:56.483683    3624 main.go:141] libmachine: (multinode-826000) DBG | Closing plugin on server side
	I0610 09:40:56.483686    3624 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:40:56.483706    3624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:40:56.627448    3624 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0610 09:40:56.632023    3624 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0610 09:40:56.638524    3624 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0610 09:40:56.645303    3624 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0610 09:40:56.649262    3624 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0610 09:40:56.657839    3624 command_runner.go:130] > pod/storage-provisioner created
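	Note: with storageclass.yaml and storage-provisioner.yaml both applied, the default addons are complete. A quick manual check (a hedged example; this output was not captured in the run):
	
	        kubectl -n kube-system get pod storage-provisioner
	        kubectl get storageclass standard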
	I0610 09:40:56.662121    3624 main.go:141] libmachine: Making call to close driver server
	I0610 09:40:56.662135    3624 main.go:141] libmachine: (multinode-826000) Calling .Close
	I0610 09:40:56.662284    3624 main.go:141] libmachine: (multinode-826000) DBG | Closing plugin on server side
	I0610 09:40:56.662301    3624 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:40:56.662303    3624 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:40:56.662316    3624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:40:56.662325    3624 main.go:141] libmachine: Making call to close driver server
	I0610 09:40:56.662335    3624 main.go:141] libmachine: (multinode-826000) Calling .Close
	I0610 09:40:56.662457    3624 main.go:141] libmachine: (multinode-826000) DBG | Closing plugin on server side
	I0610 09:40:56.662490    3624 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:40:56.662506    3624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:40:56.662520    3624 kapi.go:59] client config for multinode-826000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.key", CAFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x257f980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 09:40:56.662770    3624 node_ready.go:35] waiting up to 6m0s for node "multinode-826000" to be "Ready" ...
	I0610 09:40:56.683447    3624 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0610 09:40:56.726322    3624 addons.go:499] enable addons completed in 932.251592ms: enabled=[default-storageclass storage-provisioner]
	I0610 09:40:56.726365    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:40:56.726372    3624 round_trippers.go:469] Request Headers:
	I0610 09:40:56.726378    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:40:56.726384    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:40:56.728168    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:40:56.728179    3624 round_trippers.go:577] Response Headers:
	I0610 09:40:56.728185    3624 round_trippers.go:580]     Audit-Id: 44d514cb-522e-403c-89ce-02b835ce899a
	I0610 09:40:56.728194    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:40:56.728199    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:40:56.728205    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:40:56.728210    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:40:56.728214    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:40:56 GMT
	I0610 09:40:56.728294    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:40:57.230930    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:40:57.230956    3624 round_trippers.go:469] Request Headers:
	I0610 09:40:57.230969    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:40:57.230983    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:40:57.233919    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:40:57.233938    3624 round_trippers.go:577] Response Headers:
	I0610 09:40:57.233945    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:40:57.233952    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:40:57.233968    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:40:57 GMT
	I0610 09:40:57.233975    3624 round_trippers.go:580]     Audit-Id: 74b8eb8b-d838-4fa1-997d-edfa4ff66d67
	I0610 09:40:57.233982    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:40:57.233989    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:40:57.234097    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:40:57.729457    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:40:57.729471    3624 round_trippers.go:469] Request Headers:
	I0610 09:40:57.729477    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:40:57.729484    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:40:57.730856    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:40:57.730867    3624 round_trippers.go:577] Response Headers:
	I0610 09:40:57.730872    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:40:57.730881    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:40:57 GMT
	I0610 09:40:57.730887    3624 round_trippers.go:580]     Audit-Id: 61d7c875-c11e-4211-b1d4-fe70c7b7635b
	I0610 09:40:57.730897    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:40:57.730901    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:40:57.730906    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:40:57.730992    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:40:58.229993    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:40:58.230020    3624 round_trippers.go:469] Request Headers:
	I0610 09:40:58.230032    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:40:58.230041    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:40:58.232903    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:40:58.232921    3624 round_trippers.go:577] Response Headers:
	I0610 09:40:58.232930    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:40:58 GMT
	I0610 09:40:58.232938    3624 round_trippers.go:580]     Audit-Id: 67e7cc1c-fe30-4655-8883-b3133e28d61f
	I0610 09:40:58.232944    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:40:58.232952    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:40:58.232959    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:40:58.232966    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:40:58.233059    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:40:58.728971    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:40:58.728996    3624 round_trippers.go:469] Request Headers:
	I0610 09:40:58.729009    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:40:58.729019    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:40:58.732101    3624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:40:58.732117    3624 round_trippers.go:577] Response Headers:
	I0610 09:40:58.732125    3624 round_trippers.go:580]     Audit-Id: 63d9ff5b-a35b-4857-9467-33d6e2820d3d
	I0610 09:40:58.732155    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:40:58.732163    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:40:58.732168    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:40:58.732174    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:40:58.732179    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:40:58 GMT
	I0610 09:40:58.732318    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:40:58.732554    3624 node_ready.go:58] node "multinode-826000" has status "Ready":"False"
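	Note: node_ready polls GET /api/v1/nodes/multinode-826000 roughly every 500ms until the kubelet reports the Ready condition as True. The hand-rolled equivalent of that check (a sketch, assuming the same kubeconfig) is:
	
	        kubectl get node multinode-826000 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	
	which would print False for each cycle below until the node flips to Ready at 09:41:05.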
	I0610 09:40:59.230033    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:40:59.230056    3624 round_trippers.go:469] Request Headers:
	I0610 09:40:59.230070    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:40:59.230081    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:40:59.233090    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:40:59.233110    3624 round_trippers.go:577] Response Headers:
	I0610 09:40:59.233128    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:40:59.233135    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:40:59.233143    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:40:59 GMT
	I0610 09:40:59.233149    3624 round_trippers.go:580]     Audit-Id: 31e2b18a-f271-4828-9f9e-942f998cb982
	I0610 09:40:59.233156    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:40:59.233165    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:40:59.233489    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:40:59.728831    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:40:59.728844    3624 round_trippers.go:469] Request Headers:
	I0610 09:40:59.728851    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:40:59.728856    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:40:59.730727    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:40:59.730739    3624 round_trippers.go:577] Response Headers:
	I0610 09:40:59.730747    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:40:59 GMT
	I0610 09:40:59.730755    3624 round_trippers.go:580]     Audit-Id: 3c41c8d4-98a6-4948-aa83-caa77bc1f3d9
	I0610 09:40:59.730767    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:40:59.730774    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:40:59.730780    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:40:59.730786    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:40:59.730911    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:41:00.229255    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:00.229278    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:00.229291    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:00.229302    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:00.231944    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:00.231964    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:00.231985    3624 round_trippers.go:580]     Audit-Id: 2dc16382-2d5b-41e7-85b0-504ec32bfb16
	I0610 09:41:00.231998    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:00.232009    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:00.232020    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:00.232026    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:00.232033    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:00 GMT
	I0610 09:41:00.232135    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:41:00.729771    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:00.729791    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:00.729803    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:00.729818    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:00.732693    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:00.732711    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:00.732740    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:00.732751    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:00 GMT
	I0610 09:41:00.732758    3624 round_trippers.go:580]     Audit-Id: 861d95ad-f69a-4dce-b4d9-da3214217b47
	I0610 09:41:00.732764    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:00.732770    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:00.732779    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:00.732875    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:41:00.733131    3624 node_ready.go:58] node "multinode-826000" has status "Ready":"False"
	I0610 09:41:01.229488    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:01.229508    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:01.229521    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:01.229532    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:01.231895    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:01.231908    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:01.231918    3624 round_trippers.go:580]     Audit-Id: 4083f012-927d-4be2-9fe9-577ec7f230a8
	I0610 09:41:01.231928    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:01.231937    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:01.231950    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:01.231958    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:01.231967    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:01 GMT
	I0610 09:41:01.232154    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:41:01.728897    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:01.728911    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:01.728918    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:01.728923    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:01.730599    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:41:01.730622    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:01.730634    3624 round_trippers.go:580]     Audit-Id: 7a34f840-79f2-4909-9be0-adc21d7c8ac6
	I0610 09:41:01.730642    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:01.730650    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:01.730657    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:01.730671    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:01.730677    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:01 GMT
	I0610 09:41:01.730827    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:41:02.228792    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:02.228806    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:02.228813    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:02.228819    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:02.230546    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:41:02.230560    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:02.230585    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:02.230611    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:02.230623    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:02 GMT
	I0610 09:41:02.230630    3624 round_trippers.go:580]     Audit-Id: 05d29400-8ae8-47c0-b5c5-73eca8133fef
	I0610 09:41:02.230638    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:02.230643    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:02.230720    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:41:02.729205    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:02.729225    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:02.729237    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:02.729248    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:02.732100    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:02.732113    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:02.732123    3624 round_trippers.go:580]     Audit-Id: 03b482fb-7be9-4cc4-93e7-cdc728324287
	I0610 09:41:02.732141    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:02.732149    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:02.732157    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:02.732165    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:02.732171    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:02 GMT
	I0610 09:41:02.732291    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:41:03.228950    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:03.228964    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:03.228973    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:03.228981    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:03.231257    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:03.231267    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:03.231276    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:03.231293    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:03.231304    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:03.231309    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:03 GMT
	I0610 09:41:03.231315    3624 round_trippers.go:580]     Audit-Id: 9fd10fba-94fa-4b06-bf2d-45268542e5a0
	I0610 09:41:03.231319    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:03.231532    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:41:03.231724    3624 node_ready.go:58] node "multinode-826000" has status "Ready":"False"
	I0610 09:41:03.729686    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:03.729707    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:03.729719    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:03.729729    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:03.732731    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:03.732748    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:03.732763    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:03.732771    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:03.732780    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:03 GMT
	I0610 09:41:03.732787    3624 round_trippers.go:580]     Audit-Id: 20382463-16b7-466c-863b-8df6ca51debf
	I0610 09:41:03.732793    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:03.732800    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:03.732895    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:41:04.230031    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:04.230053    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:04.230066    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:04.230076    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:04.234153    3624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 09:41:04.234166    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:04.234172    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:04.234182    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:04.234187    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:04.234192    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:04.234197    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:04 GMT
	I0610 09:41:04.234202    3624 round_trippers.go:580]     Audit-Id: 82f3bfd4-832e-464a-9d8d-dbecd3c446ce
	I0610 09:41:04.234371    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:41:04.729999    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:04.730020    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:04.730033    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:04.730048    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:04.733091    3624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:41:04.733115    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:04.733128    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:04 GMT
	I0610 09:41:04.733135    3624 round_trippers.go:580]     Audit-Id: b76c4725-94e7-422c-a399-feb719f8d954
	I0610 09:41:04.733142    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:04.733149    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:04.733164    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:04.733170    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:04.733295    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:41:05.230379    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:05.230399    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:05.230411    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:05.230421    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:05.233526    3624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:41:05.233540    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:05.233552    3624 round_trippers.go:580]     Audit-Id: f458fd98-ff2a-415d-b7b7-4958d5e6eaa5
	I0610 09:41:05.233563    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:05.233571    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:05.233577    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:05.233583    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:05.233591    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:05 GMT
	I0610 09:41:05.233706    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"312","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 09:41:05.233964    3624 node_ready.go:58] node "multinode-826000" has status "Ready":"False"
	I0610 09:41:05.729309    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:05.729332    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:05.729345    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:05.729360    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:05.732270    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:05.732290    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:05.732299    3624 round_trippers.go:580]     Audit-Id: 20044f9b-138f-4d40-a857-d9c86db03d5d
	I0610 09:41:05.732306    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:05.732313    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:05.732321    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:05.732328    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:05.732336    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:05 GMT
	I0610 09:41:05.732659    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"385","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 09:41:05.732913    3624 node_ready.go:49] node "multinode-826000" has status "Ready":"True"
	I0610 09:41:05.732926    3624 node_ready.go:38] duration metric: took 9.006645594s waiting for node "multinode-826000" to be "Ready" ...
	I0610 09:41:05.732938    3624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
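	Note: this phase waits on one pod per label listed above (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler). A manual equivalent for the first label (a hypothetical sketch, not captured output):
	
	        kubectl -n kube-system get pods -l k8s-app=kube-dns
	        kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m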
	I0610 09:41:05.732996    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0610 09:41:05.733002    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:05.733010    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:05.733017    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:05.735749    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:05.735759    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:05.735766    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:05.735775    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:05 GMT
	I0610 09:41:05.735781    3624 round_trippers.go:580]     Audit-Id: 2de5d539-adde-4ebf-98ec-3d939cf933e8
	I0610 09:41:05.735787    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:05.735791    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:05.735802    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:05.736401    3624 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"391"},"items":[{"metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"390","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53971 chars]
	I0610 09:41:05.738721    3624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-r9sjl" in "kube-system" namespace to be "Ready" ...
	I0610 09:41:05.738763    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:41:05.738767    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:05.738774    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:05.738780    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:05.742305    3624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:41:05.742314    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:05.742320    3624 round_trippers.go:580]     Audit-Id: fb566626-2b21-43a3-b5c6-558e7498adf4
	I0610 09:41:05.742324    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:05.742329    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:05.742334    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:05.742342    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:05.742347    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:05 GMT
	I0610 09:41:05.742435    3624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"390","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0610 09:41:05.742698    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:05.742705    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:05.742711    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:05.742716    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:05.744224    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:41:05.744234    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:05.744245    3624 round_trippers.go:580]     Audit-Id: f460036f-e1bd-4caf-90f5-9484867aa5e9
	I0610 09:41:05.744254    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:05.744259    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:05.744263    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:05.744270    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:05.744289    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:05 GMT
	I0610 09:41:05.744409    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"385","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 09:41:06.244732    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:41:06.244746    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:06.244753    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:06.244758    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:06.246938    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:06.246949    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:06.246955    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:06.246973    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:06 GMT
	I0610 09:41:06.246982    3624 round_trippers.go:580]     Audit-Id: 5697a670-f826-49d0-985d-cc30ec841c20
	I0610 09:41:06.246987    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:06.246992    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:06.246996    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:06.247067    3624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"390","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0610 09:41:06.247344    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:06.247350    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:06.247360    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:06.247365    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:06.250290    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:06.250298    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:06.250304    3624 round_trippers.go:580]     Audit-Id: d8085e76-aead-44aa-80aa-7796dd0a7fcc
	I0610 09:41:06.250309    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:06.250313    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:06.250318    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:06.250323    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:06.250328    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:06 GMT
	I0610 09:41:06.250397    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"385","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 09:41:06.744734    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:41:06.744746    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:06.744753    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:06.744759    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:06.746789    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:06.746803    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:06.746809    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:06.746827    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:06.746837    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:06 GMT
	I0610 09:41:06.746843    3624 round_trippers.go:580]     Audit-Id: 40b75b95-f54a-4818-b290-994663c1d255
	I0610 09:41:06.746850    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:06.746855    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:06.746992    3624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"390","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0610 09:41:06.747285    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:06.747292    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:06.747298    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:06.747303    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:06.748609    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:41:06.748617    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:06.748625    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:06.748629    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:06.748634    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:06 GMT
	I0610 09:41:06.748640    3624 round_trippers.go:580]     Audit-Id: c039540e-b70b-48dd-b5a7-4df520e5031a
	I0610 09:41:06.748644    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:06.748649    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:06.748808    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"385","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 09:41:07.245889    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:41:07.245911    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.245924    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.245934    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.249169    3624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:41:07.249187    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.249196    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.249202    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.249211    3624 round_trippers.go:580]     Audit-Id: d421d69c-c62b-4b3c-929a-26da9a2ada67
	I0610 09:41:07.249219    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.249225    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.249233    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.249321    3624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"400","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0610 09:41:07.249690    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:07.249700    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.249708    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.249715    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.251265    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:41:07.251274    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.251280    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.251308    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.251317    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.251323    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.251328    3624 round_trippers.go:580]     Audit-Id: 34e23937-9dac-4c6c-a4e3-769692d968ed
	I0610 09:41:07.251334    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.251434    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"385","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 09:41:07.251615    3624 pod_ready.go:92] pod "coredns-5d78c9869d-r9sjl" in "kube-system" namespace has status "Ready":"True"
	I0610 09:41:07.251624    3624 pod_ready.go:81] duration metric: took 1.512897903s waiting for pod "coredns-5d78c9869d-r9sjl" in "kube-system" namespace to be "Ready" ...
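Annotation: the 1.5s coredns wait above (and the etcd, apiserver, controller-manager, proxy, and scheduler waits that follow) is the standard pod-readiness poll: GET the Pod, inspect its Ready condition, GET the owning Node, and retry on an interval (the timestamps show roughly 500ms between attempts). A minimal sketch of that loop with client-go, assuming an already-configured clientset; the helper name waitPodReady is illustrative, not minikube's actual function:

// waitPodReady polls until the named pod reports the Ready condition,
// mirroring the GET /pods/{name} loop in the log above. Illustrative only.
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}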
	I0610 09:41:07.251630    3624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:41:07.251663    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-826000
	I0610 09:41:07.251667    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.251673    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.251681    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.252915    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:41:07.252925    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.252931    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.252938    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.252946    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.252951    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.252957    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.252961    3624 round_trippers.go:580]     Audit-Id: 4ea82815-0e69-493a-9b07-fbcc04b7306e
	I0610 09:41:07.253065    3624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-826000","namespace":"kube-system","uid":"9b124acd-926c-431e-bc35-6b845e46eefa","resourceVersion":"281","creationTimestamp":"2023-06-10T16:40:40Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"4257ff4fa7ee28e8b93d5e2345c387ba","kubernetes.io/config.mirror":"4257ff4fa7ee28e8b93d5e2345c387ba","kubernetes.io/config.seen":"2023-06-10T16:40:35.743576396Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0610 09:41:07.253292    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:07.253299    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.253305    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.253311    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.254482    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:41:07.254493    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.254501    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.254511    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.254518    3624 round_trippers.go:580]     Audit-Id: 288c99b2-c7c0-44bb-b97d-abb2647bea94
	I0610 09:41:07.254525    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.254533    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.254551    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.254634    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"385","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 09:41:07.254806    3624 pod_ready.go:92] pod "etcd-multinode-826000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:41:07.254813    3624 pod_ready.go:81] duration metric: took 3.177703ms waiting for pod "etcd-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:41:07.254820    3624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:41:07.254845    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-826000
	I0610 09:41:07.254849    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.254854    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.254860    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.256028    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:41:07.256037    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.256045    3624 round_trippers.go:580]     Audit-Id: cbd61487-7a5d-4c53-961b-6aa62985d169
	I0610 09:41:07.256057    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.256065    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.256073    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.256081    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.256087    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.256169    3624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-826000","namespace":"kube-system","uid":"f3b403ee-f6c6-47cb-baf3-3c15231b7625","resourceVersion":"290","creationTimestamp":"2023-06-10T16:40:40Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"376ee319583f65c2f2f990eb64ecbee8","kubernetes.io/config.mirror":"376ee319583f65c2f2f990eb64ecbee8","kubernetes.io/config.seen":"2023-06-10T16:40:35.743576953Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0610 09:41:07.256393    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:07.256399    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.256405    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.256411    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.257473    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:41:07.257482    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.257489    3624 round_trippers.go:580]     Audit-Id: 6ec14074-33cf-447a-b433-1a13b37970a9
	I0610 09:41:07.257496    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.257506    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.257512    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.257517    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.257525    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.257607    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"385","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 09:41:07.257793    3624 pod_ready.go:92] pod "kube-apiserver-multinode-826000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:41:07.257800    3624 pod_ready.go:81] duration metric: took 2.974851ms waiting for pod "kube-apiserver-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:41:07.257808    3624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:41:07.257842    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-826000
	I0610 09:41:07.257846    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.257852    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.257860    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.259046    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:41:07.259055    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.259060    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.259065    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.259070    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.259076    3624 round_trippers.go:580]     Audit-Id: ba65f424-fb66-4575-b0fa-570af0e09adb
	I0610 09:41:07.259080    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.259085    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.259210    3624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-826000","namespace":"kube-system","uid":"bc079029-af76-412a-b16a-e3bd76a3354a","resourceVersion":"287","creationTimestamp":"2023-06-10T16:40:40Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dadf7af017919599a45f7ef25c850049","kubernetes.io/config.mirror":"dadf7af017919599a45f7ef25c850049","kubernetes.io/config.seen":"2023-06-10T16:40:35.743573226Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0610 09:41:07.259459    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:07.259466    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.259472    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.259477    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.260623    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:41:07.260633    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.260644    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.260653    3624 round_trippers.go:580]     Audit-Id: 38eb86c8-ec8b-4f80-b5af-df6b0bca51e0
	I0610 09:41:07.260661    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.260668    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.260676    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.260684    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.260736    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"385","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 09:41:07.260895    3624 pod_ready.go:92] pod "kube-controller-manager-multinode-826000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:41:07.260902    3624 pod_ready.go:81] duration metric: took 3.089254ms waiting for pod "kube-controller-manager-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:41:07.260910    3624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7dxj9" in "kube-system" namespace to be "Ready" ...
	I0610 09:41:07.260935    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7dxj9
	I0610 09:41:07.260939    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.260945    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.260951    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.262242    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:41:07.262252    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.262260    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.262267    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.262272    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.262277    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.262282    3624 round_trippers.go:580]     Audit-Id: f36912d4-546c-4be5-93f6-6ba4a11e3e5a
	I0610 09:41:07.262289    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.262372    3624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7dxj9","generateName":"kube-proxy-","namespace":"kube-system","uid":"52c8c8ff-4db3-4df4-9a64-dfa1f0221f20","resourceVersion":"372","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a54e86e6-ea1b-4f1a-a115-3032051cb5cd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a54e86e6-ea1b-4f1a-a115-3032051cb5cd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5531 chars]
	I0610 09:41:07.330214    3624 request.go:628] Waited for 67.589568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:07.330280    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:07.330291    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.330303    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.330314    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.333173    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:07.333190    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.333201    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.333232    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.333242    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.333249    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.333256    3624 round_trippers.go:580]     Audit-Id: 5986ebc6-2723-4498-9fb9-3f2c73c5ed8b
	I0610 09:41:07.333263    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.333369    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"385","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 09:41:07.333615    3624 pod_ready.go:92] pod "kube-proxy-7dxj9" in "kube-system" namespace has status "Ready":"True"
	I0610 09:41:07.333626    3624 pod_ready.go:81] duration metric: took 72.710918ms waiting for pod "kube-proxy-7dxj9" in "kube-system" namespace to be "Ready" ...
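Annotation: the "Waited for ... due to client-side throttling, not priority and fairness" entries above come from client-go's own rate limiter on rest.Config, which falls back to 5 QPS with a burst of 10 when left unset; the rapid back-to-back GETs in this loop exhaust the burst. A hedged sketch of where that knob lives (raising the values is optional tuning, not something this test does):

// Sketch: loosening client-go's client-side rate limit, the source of the
// "Waited for ... due to client-side throttling" lines in this log.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// When QPS/Burst are zero, client-go applies its defaults (5 QPS, burst 10),
	// which is what throttles the rapid GETs above.
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}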
	I0610 09:41:07.333663    3624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:41:07.529972    3624 request.go:628] Waited for 196.260068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-826000
	I0610 09:41:07.530014    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-826000
	I0610 09:41:07.530019    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.530044    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.530051    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.531512    3624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:41:07.531523    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.531532    3624 round_trippers.go:580]     Audit-Id: 66bc64e3-7cdb-432a-9142-9a626b9c98bc
	I0610 09:41:07.531542    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.531548    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.531553    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.531558    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.531563    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.531653    3624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-826000","namespace":"kube-system","uid":"49d5bdcb-168b-4719-917a-80bd9859ccb6","resourceVersion":"283","creationTimestamp":"2023-06-10T16:40:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"07dc3f9536175f6e9e243e6c2d78c2e4","kubernetes.io/config.mirror":"07dc3f9536175f6e9e243e6c2d78c2e4","kubernetes.io/config.seen":"2023-06-10T16:40:42.865304771Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0610 09:41:07.730014    3624 request.go:628] Waited for 198.109678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:07.730077    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:41:07.730089    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.730102    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.730113    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.733292    3624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:41:07.733313    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.733323    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.733334    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.733341    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.733348    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.733355    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.733362    3624 round_trippers.go:580]     Audit-Id: 72005d92-ea49-4d88-afbf-85f3b20f3f86
	I0610 09:41:07.733449    3624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"385","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 09:41:07.733699    3624 pod_ready.go:92] pod "kube-scheduler-multinode-826000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:41:07.733710    3624 pod_ready.go:81] duration metric: took 400.038282ms waiting for pod "kube-scheduler-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:41:07.733719    3624 pod_ready.go:38] duration metric: took 2.000773133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:41:07.733740    3624 api_server.go:52] waiting for apiserver process to appear ...
	I0610 09:41:07.733810    3624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:41:07.744069    3624 command_runner.go:130] > 1673
	I0610 09:41:07.744088    3624 api_server.go:72] duration metric: took 11.429642582s to wait for apiserver process to appear ...
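Annotation: the apiserver-process check above is a single pgrep, run by minikube over SSH via its ssh_runner; the "> 1673" reply is the matching PID. A hypothetical local stand-in in Go, for reference:

// Sketch: the process check behind "waiting for apiserver process to appear".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func apiserverPID() (string, error) {
	// -f: match against the full command line; -x: the pattern must match that
	// line exactly; -n: pick the newest matching process. Output is the PID.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver not running: %w", err)
	}
	return strings.TrimSpace(string(out)), nil // e.g. "1673"
}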
	I0610 09:41:07.744097    3624 api_server.go:88] waiting for apiserver healthz status ...
	I0610 09:41:07.744112    3624 api_server.go:253] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0610 09:41:07.747763    3624 api_server.go:279] https://192.168.64.12:8443/healthz returned 200:
	ok
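Annotation: the healthz probe is a plain HTTPS GET that expects status 200 with body "ok". A minimal sketch; the InsecureSkipVerify below is a stand-in for the cluster-CA trust the real client derives from the kubeconfig:

// Sketch: the /healthz probe that logged `returned 200: ok` above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func apiserverHealthy(base string) error {
	// Stand-in TLS config; the real check trusts the cluster CA from kubeconfig.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
	}
	return nil
}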
	I0610 09:41:07.747800    3624 round_trippers.go:463] GET https://192.168.64.12:8443/version
	I0610 09:41:07.747805    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.747814    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.747820    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.748547    3624 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 09:41:07.748558    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.748570    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.748576    3624 round_trippers.go:580]     Content-Length: 263
	I0610 09:41:07.748581    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.748587    3624 round_trippers.go:580]     Audit-Id: 3a413967-c140-44cd-9f01-65f13ff3d38e
	I0610 09:41:07.748594    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.748599    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.748604    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.748614    3624 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.2",
	  "gitCommit": "7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647",
	  "gitTreeState": "clean",
	  "buildDate": "2023-05-17T14:13:28Z",
	  "goVersion": "go1.20.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 09:41:07.748669    3624 api_server.go:141] control plane version: v1.27.2
	I0610 09:41:07.748677    3624 api_server.go:131] duration metric: took 4.575561ms to wait for apiserver health ...
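Annotation: the /version body above is the apiserver's version.Info document; decoding it is a one-liner with the upstream struct. Sketch, reusing an HTTP client like the one in the healthz sketch:

// Sketch: decoding the /version response shown above.
package main

import (
	"encoding/json"
	"net/http"

	"k8s.io/apimachinery/pkg/version"
)

func controlPlaneVersion(client *http.Client, base string) (string, error) {
	resp, err := client.Get(base + "/version")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var info version.Info // carries Major, Minor, GitVersion, GoVersion, ...
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		return "", err
	}
	return info.GitVersion, nil // e.g. "v1.27.2"
}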
	I0610 09:41:07.748685    3624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 09:41:07.929692    3624 request.go:628] Waited for 180.957518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0610 09:41:07.929756    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0610 09:41:07.929791    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:07.929804    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:07.929815    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:07.933434    3624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:41:07.933444    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:07.933450    3624 round_trippers.go:580]     Audit-Id: abe8c8d7-ec76-4af6-ace2-702de38c6032
	I0610 09:41:07.933455    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:07.933462    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:07.933470    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:07.933484    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:07.933490    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:07 GMT
	I0610 09:41:07.933916    3624 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"400","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54087 chars]
	I0610 09:41:07.935179    3624 system_pods.go:59] 8 kube-system pods found
	I0610 09:41:07.935191    3624 system_pods.go:61] "coredns-5d78c9869d-r9sjl" [d3e6fbc7-ad9e-47a1-8592-9a22062f0845] Running
	I0610 09:41:07.935196    3624 system_pods.go:61] "etcd-multinode-826000" [9b124acd-926c-431e-bc35-6b845e46eefa] Running
	I0610 09:41:07.935200    3624 system_pods.go:61] "kindnet-9r8df" [39c3c671-53e3-4745-ad44-d4d88bac2e7b] Running
	I0610 09:41:07.935204    3624 system_pods.go:61] "kube-apiserver-multinode-826000" [f3b403ee-f6c6-47cb-baf3-3c15231b7625] Running
	I0610 09:41:07.935207    3624 system_pods.go:61] "kube-controller-manager-multinode-826000" [bc079029-af76-412a-b16a-e3bd76a3354a] Running
	I0610 09:41:07.935211    3624 system_pods.go:61] "kube-proxy-7dxj9" [52c8c8ff-4db3-4df4-9a64-dfa1f0221f20] Running
	I0610 09:41:07.935214    3624 system_pods.go:61] "kube-scheduler-multinode-826000" [49d5bdcb-168b-4719-917a-80bd9859ccb6] Running
	I0610 09:41:07.935218    3624 system_pods.go:61] "storage-provisioner" [045816f3-b7b8-4909-8dc7-42d6d795adb1] Running
	I0610 09:41:07.935222    3624 system_pods.go:74] duration metric: took 186.533528ms to wait for pod list to return data ...
	I0610 09:41:07.935228    3624 default_sa.go:34] waiting for default service account to be created ...
	I0610 09:41:08.130880    3624 request.go:628] Waited for 195.598503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/default/serviceaccounts
	I0610 09:41:08.130937    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/default/serviceaccounts
	I0610 09:41:08.130945    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:08.130987    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:08.131000    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:08.133805    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:08.133819    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:08.133828    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:08.133835    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:08.133842    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:08.133848    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:08.133854    3624 round_trippers.go:580]     Content-Length: 261
	I0610 09:41:08.133860    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:08 GMT
	I0610 09:41:08.133866    3624 round_trippers.go:580]     Audit-Id: 1478d9ff-5bc9-457c-a7fc-546e98f21649
	I0610 09:41:08.133880    3624 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b8380443-31b8-47c2-9195-5a380347a27a","resourceVersion":"318","creationTimestamp":"2023-06-10T16:40:55Z"}}]}
	I0610 09:41:08.134033    3624 default_sa.go:45] found service account: "default"
	I0610 09:41:08.134044    3624 default_sa.go:55] duration metric: took 198.812223ms for default service account to be created ...
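Annotation: the default-service-account wait simply lists ServiceAccounts in the default namespace until one named "default" appears (sketch, same clientset assumption):

// Sketch: the `found service account: "default"` check.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func defaultSAExists(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, sa := range sas.Items {
		if sa.Name == "default" {
			return true, nil
		}
	}
	return false, nil
}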
	I0610 09:41:08.134054    3624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 09:41:08.330188    3624 request.go:628] Waited for 196.080484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0610 09:41:08.330262    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0610 09:41:08.330270    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:08.330282    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:08.330293    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:08.333919    3624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:41:08.333939    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:08.333951    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:08.333975    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:08.333987    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:08.334001    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:08.334024    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:08 GMT
	I0610 09:41:08.334038    3624 round_trippers.go:580]     Audit-Id: 300d2ffe-e01f-48b9-8719-26502184b95b
	I0610 09:41:08.334811    3624 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"400","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54087 chars]
	I0610 09:41:08.336116    3624 system_pods.go:86] 8 kube-system pods found
	I0610 09:41:08.336126    3624 system_pods.go:89] "coredns-5d78c9869d-r9sjl" [d3e6fbc7-ad9e-47a1-8592-9a22062f0845] Running
	I0610 09:41:08.336138    3624 system_pods.go:89] "etcd-multinode-826000" [9b124acd-926c-431e-bc35-6b845e46eefa] Running
	I0610 09:41:08.336142    3624 system_pods.go:89] "kindnet-9r8df" [39c3c671-53e3-4745-ad44-d4d88bac2e7b] Running
	I0610 09:41:08.336146    3624 system_pods.go:89] "kube-apiserver-multinode-826000" [f3b403ee-f6c6-47cb-baf3-3c15231b7625] Running
	I0610 09:41:08.336150    3624 system_pods.go:89] "kube-controller-manager-multinode-826000" [bc079029-af76-412a-b16a-e3bd76a3354a] Running
	I0610 09:41:08.336154    3624 system_pods.go:89] "kube-proxy-7dxj9" [52c8c8ff-4db3-4df4-9a64-dfa1f0221f20] Running
	I0610 09:41:08.336158    3624 system_pods.go:89] "kube-scheduler-multinode-826000" [49d5bdcb-168b-4719-917a-80bd9859ccb6] Running
	I0610 09:41:08.336161    3624 system_pods.go:89] "storage-provisioner" [045816f3-b7b8-4909-8dc7-42d6d795adb1] Running
	I0610 09:41:08.336170    3624 system_pods.go:126] duration metric: took 202.110769ms to wait for k8s-apps to be running ...
	I0610 09:41:08.336177    3624 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 09:41:08.336247    3624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:41:08.345674    3624 system_svc.go:56] duration metric: took 9.49427ms WaitForService to wait for kubelet.
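Annotation: the kubelet check above relies on `systemctl is-active --quiet`, which exits 0 only when the unit is active, so the exit status alone is the answer. minikube issues it with sudo over SSH; a hypothetical local equivalent:

// Sketch: the kubelet liveness check; exit status 0 means "active".
package main

import "os/exec"

func kubeletRunning() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}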
	I0610 09:41:08.345699    3624 kubeadm.go:581] duration metric: took 12.031255778s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 09:41:08.345738    3624 node_conditions.go:102] verifying NodePressure condition ...
	I0610 09:41:08.531133    3624 request.go:628] Waited for 185.350279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes
	I0610 09:41:08.531207    3624 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
	I0610 09:41:08.531227    3624 round_trippers.go:469] Request Headers:
	I0610 09:41:08.531241    3624 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:41:08.531253    3624 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:41:08.533718    3624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:41:08.533731    3624 round_trippers.go:577] Response Headers:
	I0610 09:41:08.533741    3624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:41:08.533750    3624 round_trippers.go:580]     Content-Type: application/json
	I0610 09:41:08.533759    3624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:41:08.533770    3624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:41:08.533797    3624 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:41:08 GMT
	I0610 09:41:08.533815    3624 round_trippers.go:580]     Audit-Id: 0df243cd-a7dd-4e82-b9e3-85a2f3a5cf73
	I0610 09:41:08.533890    3624 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"385","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
	I0610 09:41:08.534185    3624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0610 09:41:08.534206    3624 node_conditions.go:123] node cpu capacity is 2
	I0610 09:41:08.534216    3624 node_conditions.go:105] duration metric: took 188.474714ms to run NodePressure ...
	I0610 09:41:08.534225    3624 start.go:228] waiting for startup goroutines ...
	I0610 09:41:08.534250    3624 start.go:233] waiting for cluster config update ...
	I0610 09:41:08.534262    3624 start.go:242] writing updated cluster config ...
	I0610 09:41:08.534687    3624 ssh_runner.go:195] Run: rm -f paused
	I0610 09:41:08.573431    3624 start.go:573] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0610 09:41:08.594601    3624 out.go:177] 
	W0610 09:41:08.630983    3624 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0610 09:41:08.652701    3624 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0610 09:41:08.674918    3624 out.go:177] * Done! kubectl is now configured to use "multinode-826000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 16:40:23 UTC, ends at Sat 2023-06-10 16:41:09 UTC. --
	Jun 10 16:40:57 multinode-826000 dockerd[853]: time="2023-06-10T16:40:57.601223091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:40:59 multinode-826000 cri-dockerd[1083]: time="2023-06-10T16:40:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe54448abb1ac756b8acd602e4cb7778a0ef3e31d14d674dea17d7f205006d42/resolv.conf as [nameserver 192.168.64.1]"
	Jun 10 16:41:02 multinode-826000 cri-dockerd[1083]: time="2023-06-10T16:41:02Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20230511-dc714da8: Status: Downloaded newer image for kindest/kindnetd:v20230511-dc714da8"
	Jun 10 16:41:02 multinode-826000 dockerd[853]: time="2023-06-10T16:41:02.290833493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:41:02 multinode-826000 dockerd[853]: time="2023-06-10T16:41:02.291253453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:02 multinode-826000 dockerd[853]: time="2023-06-10T16:41:02.291320865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:41:02 multinode-826000 dockerd[853]: time="2023-06-10T16:41:02.291444935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.045880107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.045994951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.046101964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.046125424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.046806030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.046882631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.046967719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.046996519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:06 multinode-826000 cri-dockerd[1083]: time="2023-06-10T16:41:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b080919c1ecfc540ea60cd8ee0aa306b45c3956094278b808909473fd20d83f4/resolv.conf as [nameserver 192.168.64.1]"
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.442915872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.443096803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.443139275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.443202108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:06 multinode-826000 cri-dockerd[1083]: time="2023-06-10T16:41:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2494e4985fe38d312dc6db5b9a756ae710273fb790d991321f68f697d1d26b5c/resolv.conf as [nameserver 192.168.64.1]"
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.531506522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.532365991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.532438172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:41:06 multinode-826000 dockerd[853]: time="2023-06-10T16:41:06.532559306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID
	12619bc2bf572       ead0a4a53df89                                                                              4 seconds ago       Running             coredns                   0                   2494e4985fe38
	e628a3dfc251b       6e38f40d628db                                                                              4 seconds ago       Running             storage-provisioner       0                   b080919c1ecfc
	dcf36c339d8e9       kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974   8 seconds ago       Running             kindnet-cni               0                   fe54448abb1ac
	3246cc4a932c7       b8aa50768fd67                                                                              13 seconds ago      Running             kube-proxy                0                   f4c3162aaa5c0
	ba32349cda752       86b6af7dd652c                                                                              33 seconds ago      Running             etcd                      0                   2d94b625d191b
	c0054420e3b8f       89e70da428d29                                                                              34 seconds ago      Running             kube-scheduler            0                   1e876d1d39ca0
	ae72b9818103a       ac2b7465ebba9                                                                              34 seconds ago      Running             kube-controller-manager   0                   8f3a0f3eaddd1
	0a2f2c979d7b0       c5b13e4f7806d                                                                              34 seconds ago      Running             kube-apiserver            0                   2023590fd394b
	
	* 
	* ==> coredns [12619bc2bf57] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 82b95b61957b89eeea31bdaf6987f010031330ef97d5f8469dbdaa80b119a5b0c9955b961009dd5b77ee3ada002b456836be781510516cbd9d015b1a704a24ea
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55077 - 62156 "HINFO IN 783487967199058609.7483377405974833132. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.004630964s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-826000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-826000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5
	                    minikube.k8s.io/name=multinode-826000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T09_40_43_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 16:40:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-826000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 16:41:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 16:41:05 +0000   Sat, 10 Jun 2023 16:40:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 16:41:05 +0000   Sat, 10 Jun 2023 16:40:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 16:41:05 +0000   Sat, 10 Jun 2023 16:40:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 16:41:05 +0000   Sat, 10 Jun 2023 16:41:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.12
	  Hostname:    multinode-826000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 26945a74017c42a39dfa547f74776178
	  System UUID:                39eb11ee-0000-0000-b579-f01898ef957c
	  Boot ID:                    22dba630-f875-4a48-86c6-d6917ed6cb91
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-r9sjl                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15s
	  kube-system                 etcd-multinode-826000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         30s
	  kube-system                 kindnet-9r8df                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15s
	  kube-system                 kube-apiserver-multinode-826000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-multinode-826000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-7dxj9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 kube-scheduler-multinode-826000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12s   kube-proxy       
	  Normal  Starting                 28s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28s   kubelet          Node multinode-826000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s   kubelet          Node multinode-826000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s   kubelet          Node multinode-826000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15s   node-controller  Node multinode-826000 event: Registered Node multinode-826000 in Controller
	  Normal  NodeReady                5s    kubelet          Node multinode-826000 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +4.594084] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.616449] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.037851] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.901601] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.109967] systemd-fstab-generator[514]: Ignoring "noauto" for root device
	[  +0.087722] systemd-fstab-generator[525]: Ignoring "noauto" for root device
	[  +0.700763] systemd-fstab-generator[741]: Ignoring "noauto" for root device
	[  +0.218959] systemd-fstab-generator[781]: Ignoring "noauto" for root device
	[  +0.089459] systemd-fstab-generator[799]: Ignoring "noauto" for root device
	[  +0.099994] systemd-fstab-generator[837]: Ignoring "noauto" for root device
	[  +1.251802] kauditd_printk_skb: 30 callbacks suppressed
	[  +0.158628] systemd-fstab-generator[996]: Ignoring "noauto" for root device
	[  +0.090866] systemd-fstab-generator[1007]: Ignoring "noauto" for root device
	[  +0.088234] systemd-fstab-generator[1018]: Ignoring "noauto" for root device
	[  +0.090649] systemd-fstab-generator[1029]: Ignoring "noauto" for root device
	[  +0.117371] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +4.409424] systemd-fstab-generator[1332]: Ignoring "noauto" for root device
	[  +0.837042] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.821889] systemd-fstab-generator[2220]: Ignoring "noauto" for root device
	[Jun10 16:41] kauditd_printk_skb: 16 callbacks suppressed
	
	* 
	* ==> etcd [ba32349cda75] <==
	* {"level":"info","ts":"2023-06-10T16:40:37.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 switched to configuration voters=(9888510509761246144)"}
	{"level":"info","ts":"2023-06-10T16:40:37.797Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","added-peer-id":"893b0beac40933c0","added-peer-peer-urls":["https://192.168.64.12:2380"]}
	{"level":"info","ts":"2023-06-10T16:40:37.812Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-10T16:40:37.818Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-06-10T16:40:37.818Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-06-10T16:40:37.818Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"893b0beac40933c0","initial-advertise-peer-urls":["https://192.168.64.12:2380"],"listen-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.12:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-10T16:40:37.818Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgPreVoteResp from 893b0beac40933c0 at term 1"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became candidate at term 2"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgVoteResp from 893b0beac40933c0 at term 2"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became leader at term 2"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 893b0beac40933c0 elected leader 893b0beac40933c0 at term 2"}
	{"level":"info","ts":"2023-06-10T16:40:37.987Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"893b0beac40933c0","local-member-attributes":"{Name:multinode-826000 ClientURLs:[https://192.168.64.12:2379]}","request-path":"/0/members/893b0beac40933c0/attributes","cluster-id":"51ecae2d8304f353","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:40:37.987Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:40:37.990Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T16:40:37.990Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:40:37.991Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.12:2379"}
	{"level":"info","ts":"2023-06-10T16:40:37.991Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:40:37.994Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:40:38.000Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:40:38.000Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:40:38.000Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:40:38.000Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  16:41:10 up 0 min,  0 users,  load average: 0.96, 0.22, 0.07
	Linux multinode-826000 5.10.57 #1 SMP Wed Jun 7 04:45:40 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [dcf36c339d8e] <==
	* I0610 16:41:02.415980       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0610 16:41:02.416124       1 main.go:107] hostIP = 192.168.64.12
	podIP = 192.168.64.12
	I0610 16:41:02.416216       1 main.go:116] setting mtu 1500 for CNI 
	I0610 16:41:02.416259       1 main.go:146] kindnetd IP family: "ipv4"
	I0610 16:41:02.416285       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 16:41:02.724012       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:41:02.724049       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [0a2f2c979d7b] <==
	* I0610 16:40:39.850165       1 shared_informer.go:318] Caches are synced for configmaps
	I0610 16:40:39.850217       1 cache.go:39] Caches are synced for autoregister controller
	I0610 16:40:39.850689       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 16:40:39.852143       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 16:40:39.857759       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0610 16:40:39.932638       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0610 16:40:39.940349       1 controller.go:624] quota admission added evaluator for: namespaces
	E0610 16:40:39.954084       1 controller.go:150] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0610 16:40:39.982714       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 16:40:40.518256       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 16:40:40.769138       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0610 16:40:40.774214       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0610 16:40:40.774224       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 16:40:41.096727       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 16:40:41.118853       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 16:40:41.249349       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0610 16:40:41.257177       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.64.12]
	I0610 16:40:41.257772       1 controller.go:624] quota admission added evaluator for: endpoints
	I0610 16:40:41.260603       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 16:40:41.830523       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0610 16:40:42.779571       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0610 16:40:42.790584       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0610 16:40:42.796232       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0610 16:40:55.607378       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0610 16:40:55.678898       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [ae72b9818103] <==
	* I0610 16:40:55.682025       1 shared_informer.go:318] Caches are synced for PVC protection
	I0610 16:40:55.682087       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0610 16:40:55.682463       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0610 16:40:55.682528       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0610 16:40:55.682969       1 shared_informer.go:318] Caches are synced for job
	I0610 16:40:55.685211       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0610 16:40:55.687715       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0610 16:40:55.717624       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-zhp88"
	I0610 16:40:55.749857       1 shared_informer.go:318] Caches are synced for attach detach
	I0610 16:40:55.751495       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-r9sjl"
	I0610 16:40:55.772147       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 16:40:55.830662       1 shared_informer.go:318] Caches are synced for taint
	I0610 16:40:55.830803       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0610 16:40:55.830866       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0610 16:40:55.830888       1 taint_manager.go:211] "Sending events to api server"
	I0610 16:40:55.831285       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-826000"
	I0610 16:40:55.831308       1 node_lifecycle_controller.go:1027] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0610 16:40:55.831403       1 event.go:307] "Event occurred" object="multinode-826000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-826000 event: Registered Node multinode-826000 in Controller"
	I0610 16:40:55.842686       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 16:40:55.896813       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0610 16:40:55.933788       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-zhp88"
	I0610 16:40:56.194791       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 16:40:56.231277       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 16:40:56.231338       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0610 16:41:05.833553       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	* 
	* ==> kube-proxy [3246cc4a932c] <==
	* I0610 16:40:57.738817       1 node.go:141] Successfully retrieved node IP: 192.168.64.12
	I0610 16:40:57.738885       1 server_others.go:110] "Detected node IP" address="192.168.64.12"
	I0610 16:40:57.738899       1 server_others.go:551] "Using iptables proxy"
	I0610 16:40:57.763801       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 16:40:57.763883       1 server_others.go:190] "Using iptables Proxier"
	I0610 16:40:57.764218       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 16:40:57.764791       1 server.go:657] "Version info" version="v1.27.2"
	I0610 16:40:57.764844       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:40:57.766097       1 config.go:188] "Starting service config controller"
	I0610 16:40:57.766529       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 16:40:57.767401       1 config.go:97] "Starting endpoint slice config controller"
	I0610 16:40:57.767453       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 16:40:57.766609       1 config.go:315] "Starting node config controller"
	I0610 16:40:57.768401       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 16:40:57.867666       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 16:40:57.867833       1 shared_informer.go:318] Caches are synced for service config
	I0610 16:40:57.868467       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [c0054420e3b8] <==
	* W0610 16:40:39.958453       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 16:40:39.958559       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 16:40:39.958682       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:40:39.958774       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:40:39.958852       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 16:40:39.958938       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 16:40:39.959025       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 16:40:39.959136       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 16:40:39.959229       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 16:40:39.959323       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 16:40:39.959392       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:40:39.959472       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:40:39.959730       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 16:40:39.959782       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 16:40:39.959898       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 16:40:39.959973       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 16:40:39.960084       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 16:40:39.960134       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 16:40:40.792534       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 16:40:40.792622       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 16:40:40.962518       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:40:40.962536       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:40:40.981767       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 16:40:40.981852       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 16:40:41.339329       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 16:40:23 UTC, ends at Sat 2023-06-10 16:41:11 UTC. --
	Jun 10 16:40:55 multinode-826000 kubelet[2239]: I0610 16:40:55.700874    2239 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrz2b\" (UniqueName: \"kubernetes.io/projected/52c8c8ff-4db3-4df4-9a64-dfa1f0221f20-kube-api-access-hrz2b\") pod \"kube-proxy-7dxj9\" (UID: \"52c8c8ff-4db3-4df4-9a64-dfa1f0221f20\") " pod="kube-system/kube-proxy-7dxj9"
	Jun 10 16:40:55 multinode-826000 kubelet[2239]: I0610 16:40:55.700922    2239 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhj7n\" (UniqueName: \"kubernetes.io/projected/39c3c671-53e3-4745-ad44-d4d88bac2e7b-kube-api-access-hhj7n\") pod \"kindnet-9r8df\" (UID: \"39c3c671-53e3-4745-ad44-d4d88bac2e7b\") " pod="kube-system/kindnet-9r8df"
	Jun 10 16:40:55 multinode-826000 kubelet[2239]: I0610 16:40:55.700954    2239 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/39c3c671-53e3-4745-ad44-d4d88bac2e7b-cni-cfg\") pod \"kindnet-9r8df\" (UID: \"39c3c671-53e3-4745-ad44-d4d88bac2e7b\") " pod="kube-system/kindnet-9r8df"
	Jun 10 16:40:55 multinode-826000 kubelet[2239]: I0610 16:40:55.700975    2239 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39c3c671-53e3-4745-ad44-d4d88bac2e7b-lib-modules\") pod \"kindnet-9r8df\" (UID: \"39c3c671-53e3-4745-ad44-d4d88bac2e7b\") " pod="kube-system/kindnet-9r8df"
	Jun 10 16:40:55 multinode-826000 kubelet[2239]: I0610 16:40:55.701058    2239 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52c8c8ff-4db3-4df4-9a64-dfa1f0221f20-xtables-lock\") pod \"kube-proxy-7dxj9\" (UID: \"52c8c8ff-4db3-4df4-9a64-dfa1f0221f20\") " pod="kube-system/kube-proxy-7dxj9"
	Jun 10 16:40:55 multinode-826000 kubelet[2239]: I0610 16:40:55.779275    2239 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 10 16:40:55 multinode-826000 kubelet[2239]: I0610 16:40:55.780782    2239 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 10 16:40:56 multinode-826000 kubelet[2239]: E0610 16:40:56.807535    2239 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jun 10 16:40:56 multinode-826000 kubelet[2239]: E0610 16:40:56.807576    2239 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jun 10 16:40:56 multinode-826000 kubelet[2239]: E0610 16:40:56.807689    2239 projected.go:198] Error preparing data for projected volume kube-api-access-hrz2b for pod kube-system/kube-proxy-7dxj9: failed to sync configmap cache: timed out waiting for the condition
	Jun 10 16:40:56 multinode-826000 kubelet[2239]: E0610 16:40:56.807620    2239 projected.go:198] Error preparing data for projected volume kube-api-access-hhj7n for pod kube-system/kindnet-9r8df: failed to sync configmap cache: timed out waiting for the condition
	Jun 10 16:40:56 multinode-826000 kubelet[2239]: E0610 16:40:56.807772    2239 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52c8c8ff-4db3-4df4-9a64-dfa1f0221f20-kube-api-access-hrz2b podName:52c8c8ff-4db3-4df4-9a64-dfa1f0221f20 nodeName:}" failed. No retries permitted until 2023-06-10 16:40:57.307750344 +0000 UTC m=+14.550049721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hrz2b" (UniqueName: "kubernetes.io/projected/52c8c8ff-4db3-4df4-9a64-dfa1f0221f20-kube-api-access-hrz2b") pod "kube-proxy-7dxj9" (UID: "52c8c8ff-4db3-4df4-9a64-dfa1f0221f20") : failed to sync configmap cache: timed out waiting for the condition
	Jun 10 16:40:56 multinode-826000 kubelet[2239]: E0610 16:40:56.807814    2239 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39c3c671-53e3-4745-ad44-d4d88bac2e7b-kube-api-access-hhj7n podName:39c3c671-53e3-4745-ad44-d4d88bac2e7b nodeName:}" failed. No retries permitted until 2023-06-10 16:40:57.30779609 +0000 UTC m=+14.550095512 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hhj7n" (UniqueName: "kubernetes.io/projected/39c3c671-53e3-4745-ad44-d4d88bac2e7b-kube-api-access-hhj7n") pod "kindnet-9r8df" (UID: "39c3c671-53e3-4745-ad44-d4d88bac2e7b") : failed to sync configmap cache: timed out waiting for the condition
	Jun 10 16:40:59 multinode-826000 kubelet[2239]: I0610 16:40:59.935851    2239 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe54448abb1ac756b8acd602e4cb7778a0ef3e31d14d674dea17d7f205006d42"
	Jun 10 16:41:02 multinode-826000 kubelet[2239]: I0610 16:41:02.969616    2239 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-9r8df" podStartSLOduration=5.710943133 podCreationTimestamp="2023-06-10 16:40:55 +0000 UTC" firstStartedPulling="2023-06-10 16:40:59.938966956 +0000 UTC m=+17.181266315" lastFinishedPulling="2023-06-10 16:41:02.197611048 +0000 UTC m=+19.439910404" observedRunningTime="2023-06-10 16:41:02.969392763 +0000 UTC m=+20.211692123" watchObservedRunningTime="2023-06-10 16:41:02.969587222 +0000 UTC m=+20.211886583"
	Jun 10 16:41:02 multinode-826000 kubelet[2239]: I0610 16:41:02.969732    2239 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7dxj9" podStartSLOduration=7.969717158 podCreationTimestamp="2023-06-10 16:40:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 16:40:58.036505744 +0000 UTC m=+15.278805105" watchObservedRunningTime="2023-06-10 16:41:02.969717158 +0000 UTC m=+20.212016520"
	Jun 10 16:41:05 multinode-826000 kubelet[2239]: I0610 16:41:05.665334    2239 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jun 10 16:41:05 multinode-826000 kubelet[2239]: I0610 16:41:05.681784    2239 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 16:41:05 multinode-826000 kubelet[2239]: I0610 16:41:05.681909    2239 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 16:41:05 multinode-826000 kubelet[2239]: I0610 16:41:05.784981    2239 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/045816f3-b7b8-4909-8dc7-42d6d795adb1-tmp\") pod \"storage-provisioner\" (UID: \"045816f3-b7b8-4909-8dc7-42d6d795adb1\") " pod="kube-system/storage-provisioner"
	Jun 10 16:41:05 multinode-826000 kubelet[2239]: I0610 16:41:05.785169    2239 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8l92\" (UniqueName: \"kubernetes.io/projected/045816f3-b7b8-4909-8dc7-42d6d795adb1-kube-api-access-k8l92\") pod \"storage-provisioner\" (UID: \"045816f3-b7b8-4909-8dc7-42d6d795adb1\") " pod="kube-system/storage-provisioner"
	Jun 10 16:41:05 multinode-826000 kubelet[2239]: I0610 16:41:05.785227    2239 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume\") pod \"coredns-5d78c9869d-r9sjl\" (UID: \"d3e6fbc7-ad9e-47a1-8592-9a22062f0845\") " pod="kube-system/coredns-5d78c9869d-r9sjl"
	Jun 10 16:41:05 multinode-826000 kubelet[2239]: I0610 16:41:05.785322    2239 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tfxr\" (UniqueName: \"kubernetes.io/projected/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-kube-api-access-6tfxr\") pod \"coredns-5d78c9869d-r9sjl\" (UID: \"d3e6fbc7-ad9e-47a1-8592-9a22062f0845\") " pod="kube-system/coredns-5d78c9869d-r9sjl"
	Jun 10 16:41:07 multinode-826000 kubelet[2239]: I0610 16:41:07.015778    2239 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.015751968 podCreationTimestamp="2023-06-10 16:40:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 16:41:07.015241632 +0000 UTC m=+24.257540993" watchObservedRunningTime="2023-06-10 16:41:07.015751968 +0000 UTC m=+24.258051327"
	Jun 10 16:41:07 multinode-826000 kubelet[2239]: I0610 16:41:07.015841    2239 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-r9sjl" podStartSLOduration=12.015828599 podCreationTimestamp="2023-06-10 16:40:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 16:41:06.998505962 +0000 UTC m=+24.240805328" watchObservedRunningTime="2023-06-10 16:41:07.015828599 +0000 UTC m=+24.258127965"
	
	* 
	* ==> storage-provisioner [e628a3dfc251] <==
	* I0610 16:41:06.540908       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 16:41:06.559295       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 16:41:06.559507       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 16:41:06.571970       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 16:41:06.573276       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-826000_65f7510d-39b4-4e3d-9761-a740afd6d163!
	I0610 16:41:06.584504       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"610b228d-9310-4cdc-8468-8ce5be660bed", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-826000_65f7510d-39b4-4e3d-9761-a740afd6d163 became leader
	I0610 16:41:06.674123       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-826000_65f7510d-39b4-4e3d-9761-a740afd6d163!
	

-- /stdout --
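The start log above ends with a client/cluster version-skew warning: kubectl 1.25.9 against Kubernetes 1.27.2 is a minor skew of 2, and kubectl is only supported within one minor version of the cluster. A rough Go sketch of that arithmetic, illustrative only and not minikube's actual implementation (the version strings are taken from the log above):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor returns the minor component of a "major.minor.patch" version string.
func minor(v string) int {
	m, _ := strconv.Atoi(strings.Split(v, ".")[1])
	return m
}

func main() {
	kubectlVersion, clusterVersion := "1.25.9", "1.27.2" // from the log above

	skew := minor(clusterVersion) - minor(kubectlVersion)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // prints 2, matching the log
	if skew > 1 {
		fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s\n",
			kubectlVersion, clusterVersion)
	}
}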
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-826000 -n multinode-826000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-826000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (2.88s)
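The two post-mortem probes above (helpers_test.go:254 and :261) check the apiserver field of minikube's status output and list any pods not in the Running phase. A minimal Go sketch of the same probes via os/exec — illustrative only; the binary path and profile name are taken from the log, and this is not the harness's actual helper code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "multinode-826000"

	// Apiserver state, as in helpers_test.go:254.
	status, _ := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.APIServer}}", "-p", profile, "-n", profile).CombinedOutput()
	fmt.Printf("apiserver: %s", status)

	// Pods outside the Running phase, as in helpers_test.go:261.
	pods, _ := exec.Command("kubectl", "--context", profile, "get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o=jsonpath={.items[*].metadata.name}").CombinedOutput()
	fmt.Printf("non-running pods: %s\n", pods)
}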

TestMultiNode/serial/StopMultiNode (8.36s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 stop
multinode_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p multinode-826000 stop: (8.202122302s)
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-826000 status: exit status 7 (50.339299ms)

-- stdout --
	multinode-826000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-826000 status --alsologtostderr: exit status 7 (49.818005ms)

-- stdout --
	multinode-826000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 09:41:19.957398    3689 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:41:19.957572    3689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:41:19.957578    3689 out.go:309] Setting ErrFile to fd 2...
	I0610 09:41:19.957583    3689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:41:19.957693    3689 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
	I0610 09:41:19.957871    3689 out.go:303] Setting JSON to false
	I0610 09:41:19.957893    3689 mustload.go:65] Loading cluster: multinode-826000
	I0610 09:41:19.957939    3689 notify.go:220] Checking for updates...
	I0610 09:41:19.958158    3689 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:41:19.958171    3689 status.go:255] checking status of multinode-826000 ...
	I0610 09:41:19.958504    3689 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:41:19.958570    3689 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:41:19.965173    3689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51204
	I0610 09:41:19.965471    3689 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:41:19.965883    3689 main.go:141] libmachine: Using API Version  1
	I0610 09:41:19.965892    3689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:41:19.966095    3689 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:41:19.966186    3689 main.go:141] libmachine: (multinode-826000) Calling .GetState
	I0610 09:41:19.966265    3689 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:41:19.966329    3689 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3636
	I0610 09:41:19.967176    3689 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid 3636 missing from process table
	I0610 09:41:19.967202    3689 status.go:330] multinode-826000 host status = "Stopped" (err=<nil>)
	I0610 09:41:19.967210    3689 status.go:343] host is not running, skipping remaining checks
	I0610 09:41:19.967215    3689 status.go:257] multinode-826000 status: &{Name:multinode-826000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-826000 status --alsologtostderr": multinode-826000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-826000 status --alsologtostderr": multinode-826000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000: exit status 7 (51.270748ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-826000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (8.36s)
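The failure above is a counting check: after the stop, the status output lists only the control-plane profile block, so the assertions at multinode_test.go:333 and :337 see one "host: Stopped" / "kubelet: Stopped" entry where a two-node cluster should produce two. A sketch of that kind of check — an assumption about the assertion's shape, not the test's actual code:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output captured above: only the control plane is reported.
	statusOut := "multinode-826000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	const wantNodes = 2 // the test started a two-node cluster

	if got := strings.Count(statusOut, "host: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
	}
	if got := strings.Count(statusOut, "kubelet: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
	}
}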

TestMultiNode/serial/RestartMultiNode (80.06s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-826000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0610 09:42:04.645968    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-826000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m16.41712506s)
multinode_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 status --alsologtostderr
multinode_test.go:366: status says both hosts are not running: args "out/minikube-darwin-amd64 -p multinode-826000 status --alsologtostderr": 
-- stdout --
	multinode-826000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	I0610 09:42:36.476357    3732 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:42:36.476579    3732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:42:36.476586    3732 out.go:309] Setting ErrFile to fd 2...
	I0610 09:42:36.476591    3732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:42:36.476705    3732 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
	I0610 09:42:36.476889    3732 out.go:303] Setting JSON to false
	I0610 09:42:36.476910    3732 mustload.go:65] Loading cluster: multinode-826000
	I0610 09:42:36.476956    3732 notify.go:220] Checking for updates...
	I0610 09:42:36.477159    3732 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:42:36.477173    3732 status.go:255] checking status of multinode-826000 ...
	I0610 09:42:36.477524    3732 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:42:36.477567    3732 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:42:36.484326    3732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51252
	I0610 09:42:36.484629    3732 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:42:36.485052    3732 main.go:141] libmachine: Using API Version  1
	I0610 09:42:36.485063    3732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:42:36.485272    3732 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:42:36.485383    3732 main.go:141] libmachine: (multinode-826000) Calling .GetState
	I0610 09:42:36.485462    3732 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:42:36.485530    3732 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3708
	I0610 09:42:36.486420    3732 status.go:330] multinode-826000 host status = "Running" (err=<nil>)
	I0610 09:42:36.486435    3732 host.go:66] Checking if "multinode-826000" exists ...
	I0610 09:42:36.486674    3732 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:42:36.486696    3732 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:42:36.493308    3732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51254
	I0610 09:42:36.493602    3732 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:42:36.493970    3732 main.go:141] libmachine: Using API Version  1
	I0610 09:42:36.493996    3732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:42:36.494207    3732 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:42:36.494312    3732 main.go:141] libmachine: (multinode-826000) Calling .GetIP
	I0610 09:42:36.494392    3732 host.go:66] Checking if "multinode-826000" exists ...
	I0610 09:42:36.494663    3732 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:42:36.494686    3732 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:42:36.501300    3732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51256
	I0610 09:42:36.501615    3732 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:42:36.501953    3732 main.go:141] libmachine: Using API Version  1
	I0610 09:42:36.501967    3732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:42:36.502176    3732 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:42:36.502287    3732 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:42:36.502420    3732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 09:42:36.502442    3732 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:42:36.502528    3732 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:42:36.502626    3732 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:42:36.502714    3732 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:42:36.502795    3732 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:42:36.551156    3732 ssh_runner.go:195] Run: systemctl --version
	I0610 09:42:36.554534    3732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:42:36.564389    3732 kubeconfig.go:92] found "multinode-826000" server: "https://192.168.64.12:8443"
	I0610 09:42:36.564444    3732 api_server.go:166] Checking apiserver status ...
	I0610 09:42:36.564523    3732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:42:36.572879    3732 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1614/cgroup
	I0610 09:42:36.578696    3732 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod376ee319583f65c2f2f990eb64ecbee8/492eebc8d7c905a88ec9f7e5b0f8fb52fd56b8ce0bb9b042028c6f8efa5faaf2"
	I0610 09:42:36.578754    3732 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod376ee319583f65c2f2f990eb64ecbee8/492eebc8d7c905a88ec9f7e5b0f8fb52fd56b8ce0bb9b042028c6f8efa5faaf2/freezer.state
	I0610 09:42:36.589661    3732 api_server.go:204] freezer state: "THAWED"
	I0610 09:42:36.589681    3732 api_server.go:253] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0610 09:42:36.598793    3732 api_server.go:279] https://192.168.64.12:8443/healthz returned 200:
	ok
	I0610 09:42:36.598809    3732 status.go:421] multinode-826000 apiserver status = Running (err=<nil>)
	I0610 09:42:36.598815    3732 status.go:257] multinode-826000 status: &{Name:multinode-826000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
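
The stderr trace above walks the whole status probe: find the kube-apiserver pid with pgrep, read its freezer cgroup and confirm the state is THAWED (the pod is not paused), then issue a GET against https://192.168.64.12:8443/healthz and require a 200 "ok". The final step is an ordinary HTTPS check; a self-contained sketch of it (the skipped certificate verification is an assumption to keep the example standalone — the real client trusts minikube's cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs the last step of the apiserver status probe:
// GET <endpoint>/healthz and treat anything but 200 as unhealthy.
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch: skip TLS verification. The test
		// environment's client verifies against the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.64.12:8443"); err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	fmt.Println("apiserver status = Running")
}
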
multinode_test.go:370: status says both kubelets are not running: args "out/minikube-darwin-amd64 -p multinode-826000 status --alsologtostderr": 
-- stdout --
	multinode-826000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

-- /stdout --
** stderr **
	(identical to the ** stderr ** output shown under the previous assertion)
** /stderr **
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
multinode_test.go:387: expected 2 nodes Ready status to be True, got 
-- stdout --
	' True
	'

-- /stdout --
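
The go-template passed to kubectl emits one " True"/" False" line per node, taken from each node's Ready condition; with two nodes the test expects two lines, but the restarted cluster still has only its control-plane node, so a single " True" comes back. The template is plain text/template syntax and can be exercised locally (the trimmed NodeList JSON below is a hypothetical stand-in for real `kubectl get nodes -o json` output):

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// A trimmed, hypothetical NodeList with a single node, standing in for
// the JSON that `kubectl get nodes -o json` returns.
const nodeList = `{"items":[{"status":{"conditions":[
  {"type":"MemoryPressure","status":"False"},
  {"type":"Ready","status":"True"}]}}]}`

func main() {
	// The same template the test passes to kubectl: for every node, print
	// the status of its Ready condition on its own line.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(nodeList), &doc); err != nil {
		panic(err)
	}
	if err := tmpl.Execute(os.Stdout, doc); err != nil { // prints " True" once: only one node exists
		panic(err)
	}
}
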
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-826000 logs -n 25: (2.871276446s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:38 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:38 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:38 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:38 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:38 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- exec          | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | -- nslookup kubernetes.io            |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- exec          | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | -- nslookup kubernetes.default       |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000                  | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | -- exec  -- nslookup                 |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| node    | add -p multinode-826000 -v 3         | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | multinode-826000 node stop m03       | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	| node    | multinode-826000 node start          | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | m03 --alsologtostderr                |                  |         |         |                     |                     |
	| node    | list -p multinode-826000             | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	| stop    | -p multinode-826000                  | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT | 10 Jun 23 09:40 PDT |
	| start   | -p multinode-826000                  | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT | 10 Jun 23 09:41 PDT |
	|         | --wait=true -v=8                     |                  |         |         |                     |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | list -p multinode-826000             | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:41 PDT |                     |
	| node    | multinode-826000 node delete         | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:41 PDT |                     |
	|         | m03                                  |                  |         |         |                     |                     |
	| stop    | multinode-826000 stop                | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:41 PDT | 10 Jun 23 09:41 PDT |
	| start   | -p multinode-826000                  | multinode-826000 | jenkins | v1.30.1 | 10 Jun 23 09:41 PDT | 10 Jun 23 09:42 PDT |
	|         | --wait=true -v=8                     |                  |         |         |                     |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	|         | --driver=hyperkit                    |                  |         |         |                     |                     |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:41:20
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.4 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:41:20.058742    3695 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:41:20.058916    3695 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:41:20.058922    3695 out.go:309] Setting ErrFile to fd 2...
	I0610 09:41:20.058926    3695 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:41:20.059033    3695 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
	I0610 09:41:20.060426    3695 out.go:303] Setting JSON to false
	I0610 09:41:20.079485    3695 start.go:127] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2450,"bootTime":1686412830,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0610 09:41:20.079578    3695 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:41:20.101686    3695 out.go:177] * [multinode-826000] minikube v1.30.1 on Darwin 13.4
	I0610 09:41:20.143564    3695 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:41:20.143633    3695 notify.go:220] Checking for updates...
	I0610 09:41:20.186493    3695 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:41:20.209658    3695 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 09:41:20.232371    3695 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:41:20.253540    3695 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube
	I0610 09:41:20.274588    3695 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:41:20.296167    3695 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:41:20.296806    3695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:41:20.296847    3695 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:41:20.304603    3695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51210
	I0610 09:41:20.304950    3695 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:41:20.305375    3695 main.go:141] libmachine: Using API Version  1
	I0610 09:41:20.305385    3695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:41:20.305598    3695 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:41:20.305712    3695 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:41:20.305881    3695 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:41:20.306150    3695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:41:20.306193    3695 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:41:20.312778    3695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51212
	I0610 09:41:20.313102    3695 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:41:20.313447    3695 main.go:141] libmachine: Using API Version  1
	I0610 09:41:20.313463    3695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:41:20.313656    3695 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:41:20.313749    3695 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:41:20.341605    3695 out.go:177] * Using the hyperkit driver based on existing profile
	I0610 09:41:20.383444    3695 start.go:297] selected driver: hyperkit
	I0610 09:41:20.383463    3695 start.go:875] validating driver "hyperkit" against &{Name:multinode-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:41:20.383645    3695 start.go:886] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:41:20.383753    3695 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:41:20.383959    3695 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/16578-1235/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 09:41:20.391897    3695 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.30.1
	I0610 09:41:20.395582    3695 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:41:20.395606    3695 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 09:41:20.398025    3695 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 09:41:20.398057    3695 cni.go:84] Creating CNI manager for ""
	I0610 09:41:20.398068    3695 cni.go:136] 1 nodes found, recommending kindnet
	I0610 09:41:20.398077    3695 start_flags.go:319] config:
	{Name:multinode-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:41:20.398266    3695 iso.go:125] acquiring lock: {Name:mkc028968ad126cece35ec994c5f11699b30bc34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:41:20.440623    3695 out.go:177] * Starting control plane node multinode-826000 in cluster multinode-826000
	I0610 09:41:20.461493    3695 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:41:20.461594    3695 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0610 09:41:20.461633    3695 cache.go:57] Caching tarball of preloaded images
	I0610 09:41:20.461835    3695 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 09:41:20.461853    3695 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 09:41:20.462009    3695 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/config.json ...
	I0610 09:41:20.462743    3695 cache.go:195] Successfully downloaded all kic artifacts
	I0610 09:41:20.462791    3695 start.go:364] acquiring machines lock for multinode-826000: {Name:mk73e5861e2a32aaad6eda5ce405a92c74d96949 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 09:41:20.462920    3695 start.go:368] acquired machines lock for "multinode-826000" in 97.638µs
	I0610 09:41:20.462954    3695 start.go:96] Skipping create...Using existing machine configuration
	I0610 09:41:20.462966    3695 fix.go:55] fixHost starting: 
	I0610 09:41:20.463411    3695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:41:20.463464    3695 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:41:20.470768    3695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51214
	I0610 09:41:20.471116    3695 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:41:20.471487    3695 main.go:141] libmachine: Using API Version  1
	I0610 09:41:20.471505    3695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:41:20.471721    3695 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:41:20.471817    3695 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:41:20.471919    3695 main.go:141] libmachine: (multinode-826000) Calling .GetState
	I0610 09:41:20.472007    3695 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:41:20.472080    3695 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3636
	I0610 09:41:20.472931    3695 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid 3636 missing from process table
	I0610 09:41:20.472963    3695 fix.go:103] recreateIfNeeded on multinode-826000: state=Stopped err=<nil>
	I0610 09:41:20.472979    3695 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	W0610 09:41:20.473067    3695 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 09:41:20.516578    3695 out.go:177] * Restarting existing hyperkit VM for "multinode-826000" ...
	I0610 09:41:20.539401    3695 main.go:141] libmachine: (multinode-826000) Calling .Start
	I0610 09:41:20.539659    3695 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:41:20.539719    3695 main.go:141] libmachine: (multinode-826000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/hyperkit.pid
	I0610 09:41:20.540754    3695 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid 3636 missing from process table
	I0610 09:41:20.540767    3695 main.go:141] libmachine: (multinode-826000) DBG | pid 3636 is in state "Stopped"
	I0610 09:41:20.540783    3695 main.go:141] libmachine: (multinode-826000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/hyperkit.pid...
	I0610 09:41:20.540931    3695 main.go:141] libmachine: (multinode-826000) DBG | Using UUID 39ebe0dc-07ad-11ee-b579-f01898ef957c
	I0610 09:41:20.658909    3695 main.go:141] libmachine: (multinode-826000) DBG | Generated MAC fa:20:3f:84:ae:92
	I0610 09:41:20.658935    3695 main.go:141] libmachine: (multinode-826000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-826000
	I0610 09:41:20.659060    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:20 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"39ebe0dc-07ad-11ee-b579-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003e1380)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/bzimage", Initrd:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 09:41:20.659091    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:20 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"39ebe0dc-07ad-11ee-b579-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003e1380)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/bzimage", Initrd:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 09:41:20.659195    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:20 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "39ebe0dc-07ad-11ee-b579-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/multinode-826000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/tty,log=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/bzimage,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-826000"}
	I0610 09:41:20.659241    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:20 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 39ebe0dc-07ad-11ee-b579-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/multinode-826000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/tty,log=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/console-ring -f kexec,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/bzimage,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-826000"
	I0610 09:41:20.659258    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:20 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 09:41:20.660658    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:20 DEBUG: hyperkit: Pid is 3708
	I0610 09:41:20.661153    3695 main.go:141] libmachine: (multinode-826000) DBG | Attempt 0
	I0610 09:41:20.661189    3695 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:41:20.661219    3695 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3708
	I0610 09:41:20.663329    3695 main.go:141] libmachine: (multinode-826000) DBG | Searching for fa:20:3f:84:ae:92 in /var/db/dhcpd_leases ...
	I0610 09:41:20.663395    3695 main.go:141] libmachine: (multinode-826000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0610 09:41:20.663433    3695 main.go:141] libmachine: (multinode-826000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:fa:20:3f:84:ae:92 ID:1,fa:20:3f:84:ae:92 Lease:0x6485f8f8}
	I0610 09:41:20.663456    3695 main.go:141] libmachine: (multinode-826000) DBG | Found match: fa:20:3f:84:ae:92
	I0610 09:41:20.663489    3695 main.go:141] libmachine: (multinode-826000) DBG | IP: 192.168.64.12
	I0610 09:41:20.663538    3695 main.go:141] libmachine: (multinode-826000) Calling .GetConfigRaw
	I0610 09:41:20.664179    3695 main.go:141] libmachine: (multinode-826000) Calling .GetIP
	I0610 09:41:20.664334    3695 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/config.json ...
	I0610 09:41:20.664621    3695 machine.go:88] provisioning docker machine ...
	I0610 09:41:20.664630    3695 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:41:20.664749    3695 main.go:141] libmachine: (multinode-826000) Calling .GetMachineName
	I0610 09:41:20.664850    3695 buildroot.go:166] provisioning hostname "multinode-826000"
	I0610 09:41:20.664861    3695 main.go:141] libmachine: (multinode-826000) Calling .GetMachineName
	I0610 09:41:20.664961    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:41:20.665070    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:41:20.665189    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:20.665277    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:20.665365    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:41:20.665522    3695 main.go:141] libmachine: Using SSH client type: native
	I0610 09:41:20.665872    3695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:41:20.665882    3695 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-826000 && echo "multinode-826000" | sudo tee /etc/hostname
	I0610 09:41:20.667416    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:20 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 09:41:20.723318    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:20 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 09:41:20.724269    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 09:41:20.724295    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 09:41:20.724308    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 09:41:20.724323    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 09:41:21.083829    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 09:41:21.083846    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 09:41:21.187837    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 09:41:21.187853    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 09:41:21.187884    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 09:41:21.187903    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 09:41:21.188798    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 09:41:21.188811    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 09:41:25.675992    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 09:41:25.676034    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 09:41:25.676045    3695 main.go:141] libmachine: (multinode-826000) DBG | 2023/06/10 09:41:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 09:41:55.776204    3695 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-826000
	
	I0610 09:41:55.776224    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:41:55.776349    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:41:55.776451    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:55.776534    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:55.776624    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:41:55.776759    3695 main.go:141] libmachine: Using SSH client type: native
	I0610 09:41:55.777078    3695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:41:55.777090    3695 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-826000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-826000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-826000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 09:41:55.868016    3695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 09:41:55.868035    3695 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16578-1235/.minikube CaCertPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16578-1235/.minikube}
	I0610 09:41:55.868057    3695 buildroot.go:174] setting up certificates
	I0610 09:41:55.868068    3695 provision.go:83] configureAuth start
	I0610 09:41:55.868076    3695 main.go:141] libmachine: (multinode-826000) Calling .GetMachineName
	I0610 09:41:55.868214    3695 main.go:141] libmachine: (multinode-826000) Calling .GetIP
	I0610 09:41:55.868314    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:41:55.868396    3695 provision.go:138] copyHostCerts
	I0610 09:41:55.868434    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem
	I0610 09:41:55.868497    3695 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem, removing ...
	I0610 09:41:55.868505    3695 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem
	I0610 09:41:55.868640    3695 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem (1078 bytes)
	I0610 09:41:55.868845    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem
	I0610 09:41:55.868898    3695 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem, removing ...
	I0610 09:41:55.868902    3695 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem
	I0610 09:41:55.869111    3695 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem (1123 bytes)
	I0610 09:41:55.869260    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem
	I0610 09:41:55.869302    3695 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem, removing ...
	I0610 09:41:55.869307    3695 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem
	I0610 09:41:55.869370    3695 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem (1679 bytes)
	I0610 09:41:55.869524    3695 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca-key.pem org=jenkins.multinode-826000 san=[192.168.64.12 192.168.64.12 localhost 127.0.0.1 minikube multinode-826000]
	I0610 09:41:56.025678    3695 provision.go:172] copyRemoteCerts
	I0610 09:41:56.025736    3695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 09:41:56.025751    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:41:56.025873    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:41:56.025958    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:56.026040    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:41:56.026136    3695 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:41:56.074433    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 09:41:56.074507    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0610 09:41:56.090912    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 09:41:56.090967    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 09:41:56.107093    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 09:41:56.107152    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 09:41:56.123137    3695 provision.go:86] duration metric: configureAuth took 255.058527ms
	I0610 09:41:56.123149    3695 buildroot.go:189] setting minikube options for container-runtime
	I0610 09:41:56.123277    3695 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:41:56.123290    3695 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:41:56.123425    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:41:56.123520    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:41:56.123610    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:56.123701    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:56.123776    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:41:56.123897    3695 main.go:141] libmachine: Using SSH client type: native
	I0610 09:41:56.124196    3695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:41:56.124205    3695 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 09:41:56.208617    3695 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 09:41:56.208629    3695 buildroot.go:70] root file system type: tmpfs
	I0610 09:41:56.208708    3695 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 09:41:56.208721    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:41:56.208849    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:41:56.208955    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:56.209044    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:56.209130    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:41:56.209267    3695 main.go:141] libmachine: Using SSH client type: native
	I0610 09:41:56.209559    3695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:41:56.209605    3695 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 09:41:56.302334    3695 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 09:41:56.302357    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:41:56.302483    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:41:56.302589    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:56.302678    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:56.302770    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:41:56.302911    3695 main.go:141] libmachine: Using SSH client type: native
	I0610 09:41:56.303223    3695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:41:56.303238    3695 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 09:41:56.894551    3695 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
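
The `diff -u ... || { mv ...; systemctl daemon-reload && enable && restart; }` chain makes the unit update idempotent: docker is only reloaded when the rendered file actually differs from what is on disk, and the "can't stat" diff error above is the expected first-boot case, since no docker.service exists yet on a freshly provisioned guest. A rough Go sketch of the same compare-then-install step, using plain local files (the paths are illustrative, not minikube's):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged installs newContent at path only when it differs from the
// current contents, reporting whether a daemon-reload/restart is warranted.
func replaceIfChanged(path string, newContent []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // unchanged: skip the reload and restart
	}
	// First boot (file missing) or content drift: write the new unit.
	if err := os.WriteFile(path, newContent, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := replaceIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println("restart needed:", changed)
}
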
	
	I0610 09:41:56.894569    3695 machine.go:91] provisioned docker machine in 36.230068884s
	I0610 09:41:56.894580    3695 start.go:300] post-start starting for "multinode-826000" (driver="hyperkit")
	I0610 09:41:56.894595    3695 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 09:41:56.894607    3695 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:41:56.894817    3695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 09:41:56.894831    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:41:56.894929    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:41:56.895033    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:56.895120    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:41:56.895213    3695 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:41:56.944408    3695 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 09:41:56.946918    3695 command_runner.go:130] > NAME=Buildroot
	I0610 09:41:56.946929    3695 command_runner.go:130] > VERSION=2021.02.12-1-ge0c6143-dirty
	I0610 09:41:56.946938    3695 command_runner.go:130] > ID=buildroot
	I0610 09:41:56.946945    3695 command_runner.go:130] > VERSION_ID=2021.02.12
	I0610 09:41:56.946952    3695 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0610 09:41:56.947034    3695 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 09:41:56.947046    3695 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1235/.minikube/addons for local assets ...
	I0610 09:41:56.947124    3695 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1235/.minikube/files for local assets ...
	I0610 09:41:56.947298    3695 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem -> 16822.pem in /etc/ssl/certs
	I0610 09:41:56.947305    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem -> /etc/ssl/certs/16822.pem
	I0610 09:41:56.947481    3695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 09:41:56.953167    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem --> /etc/ssl/certs/16822.pem (1708 bytes)
	I0610 09:41:56.968705    3695 start.go:303] post-start completed in 74.110029ms
	I0610 09:41:56.968719    3695 fix.go:57] fixHost completed within 36.505883809s
	I0610 09:41:56.968735    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:41:56.968864    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:41:56.968967    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:56.969053    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:56.969136    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:41:56.969259    3695 main.go:141] libmachine: Using SSH client type: native
	I0610 09:41:56.969562    3695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.12 22 <nil> <nil>}
	I0610 09:41:56.969570    3695 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 09:41:57.054134    3695 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686415316.963838713
	
	I0610 09:41:57.054145    3695 fix.go:207] guest clock: 1686415316.963838713
	I0610 09:41:57.054150    3695 fix.go:220] Guest: 2023-06-10 09:41:56.963838713 -0700 PDT Remote: 2023-06-10 09:41:56.968724 -0700 PDT m=+36.941733245 (delta=-4.885287ms)
	I0610 09:41:57.054168    3695 fix.go:191] guest clock delta is within tolerance: -4.885287ms
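
The delta is guest minus host: 1686415316.963838713 - 1686415316.968724 = -0.004885287 s, i.e. -4.885287 ms, so the guest clock is close enough that no resync is pushed. A small Go sketch reproducing the arithmetic (the one-second tolerance is illustrative, not necessarily minikube's exact threshold):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock as reported by `date +%s.%N`: 1686415316.963838713
	guest := time.Unix(1686415316, 963838713)
	// Host clock sampled when the SSH command returned (PDT is UTC-7).
	remote := time.Date(2023, time.June, 10, 9, 41, 56, 968724000,
		time.FixedZone("PDT", -7*3600))

	delta := guest.Sub(remote)    // -4.885287ms in this run
	const tolerance = time.Second // illustrative threshold
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta.Abs() <= tolerance)
}
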
	I0610 09:41:57.054172    3695 start.go:83] releasing machines lock for "multinode-826000", held for 36.591370828s
	I0610 09:41:57.054190    3695 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:41:57.054318    3695 main.go:141] libmachine: (multinode-826000) Calling .GetIP
	I0610 09:41:57.054416    3695 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:41:57.054741    3695 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:41:57.054849    3695 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:41:57.054951    3695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 09:41:57.054980    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:41:57.054987    3695 ssh_runner.go:195] Run: cat /version.json
	I0610 09:41:57.054999    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:41:57.055098    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:41:57.055114    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:41:57.055218    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:57.055240    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:41:57.055326    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:41:57.055357    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:41:57.055409    3695 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:41:57.055440    3695 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:41:57.142653    3695 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 09:41:57.143562    3695 command_runner.go:130] > {"iso_version": "v1.30.1-1686096373-16019", "kicbase_version": "v0.0.39-1686006988-16632", "minikube_version": "v1.30.1", "commit": "25a6e24452a99fbf54228d85990beeaaccbd5c35"}
	I0610 09:41:57.143738    3695 ssh_runner.go:195] Run: systemctl --version
	I0610 09:41:57.147841    3695 command_runner.go:130] > systemd 247 (247)
	I0610 09:41:57.147860    3695 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0610 09:41:57.148122    3695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 09:41:57.151601    3695 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 09:41:57.151710    3695 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 09:41:57.151764    3695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 09:41:57.162495    3695 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 09:41:57.162526    3695 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 09:41:57.162534    3695 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:41:57.162617    3695 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:41:57.174752    3695 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
	I0610 09:41:57.174764    3695 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
	I0610 09:41:57.174776    3695 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
	I0610 09:41:57.174780    3695 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
	I0610 09:41:57.174784    3695 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0610 09:41:57.174788    3695 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0610 09:41:57.174798    3695 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0610 09:41:57.174803    3695 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 09:41:57.174808    3695 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:41:57.175617    3695 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:41:57.175631    3695 docker.go:563] Images already preloaded, skipping extraction
	I0610 09:41:57.175639    3695 start.go:481] detecting cgroup driver to use...
	I0610 09:41:57.175742    3695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:41:57.187400    3695 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 09:41:57.187731    3695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 09:41:57.194679    3695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 09:41:57.201629    3695 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 09:41:57.201676    3695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 09:41:57.208743    3695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:41:57.215709    3695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 09:41:57.222587    3695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:41:57.229453    3695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 09:41:57.236619    3695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 09:41:57.243651    3695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 09:41:57.249673    3695 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 09:41:57.249850    3695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 09:41:57.256085    3695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:41:57.337991    3695 ssh_runner.go:195] Run: sudo systemctl restart containerd
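
The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the sandbox image to pause:3.9, disable restrict_oom_score_adj, force SystemdCgroup = false so containerd uses the cgroupfs driver, migrate legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, after which containerd is restarted to pick the file up. A minimal Go sketch of just the SystemdCgroup substitution, equivalent to the sed expression in the log:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Same rewrite as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
}
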
	I0610 09:41:57.350372    3695 start.go:481] detecting cgroup driver to use...
	I0610 09:41:57.350440    3695 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 09:41:57.362879    3695 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 09:41:57.363079    3695 command_runner.go:130] > [Unit]
	I0610 09:41:57.363086    3695 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 09:41:57.363091    3695 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 09:41:57.363096    3695 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 09:41:57.363101    3695 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 09:41:57.363105    3695 command_runner.go:130] > StartLimitBurst=3
	I0610 09:41:57.363113    3695 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 09:41:57.363117    3695 command_runner.go:130] > [Service]
	I0610 09:41:57.363120    3695 command_runner.go:130] > Type=notify
	I0610 09:41:57.363124    3695 command_runner.go:130] > Restart=on-failure
	I0610 09:41:57.363130    3695 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 09:41:57.363148    3695 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 09:41:57.363153    3695 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 09:41:57.363162    3695 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 09:41:57.363168    3695 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 09:41:57.363173    3695 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 09:41:57.363179    3695 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 09:41:57.363187    3695 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 09:41:57.363193    3695 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 09:41:57.363197    3695 command_runner.go:130] > ExecStart=
	I0610 09:41:57.363210    3695 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0610 09:41:57.363216    3695 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 09:41:57.363222    3695 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 09:41:57.363229    3695 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 09:41:57.363234    3695 command_runner.go:130] > LimitNOFILE=infinity
	I0610 09:41:57.363238    3695 command_runner.go:130] > LimitNPROC=infinity
	I0610 09:41:57.363242    3695 command_runner.go:130] > LimitCORE=infinity
	I0610 09:41:57.363247    3695 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 09:41:57.363252    3695 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 09:41:57.363255    3695 command_runner.go:130] > TasksMax=infinity
	I0610 09:41:57.363259    3695 command_runner.go:130] > TimeoutStartSec=0
	I0610 09:41:57.363264    3695 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 09:41:57.363269    3695 command_runner.go:130] > Delegate=yes
	I0610 09:41:57.363275    3695 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 09:41:57.363279    3695 command_runner.go:130] > KillMode=process
	I0610 09:41:57.363282    3695 command_runner.go:130] > [Install]
	I0610 09:41:57.363291    3695 command_runner.go:130] > WantedBy=multi-user.target
	I0610 09:41:57.363566    3695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:41:57.375500    3695 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 09:41:57.390651    3695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:41:57.398983    3695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:41:57.407642    3695 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 09:41:57.434048    3695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:41:57.442548    3695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:41:57.454462    3695 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
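
Note the repointing: /etc/crictl.yaml was first written with containerd's socket, and once docker is confirmed as the runtime it is rewritten so crictl talks to cri-dockerd, the CRI shim in front of dockerd. A sketch of that final write (the path and contents are taken from the log; the tee there runs under sudo, so this needs root too):

package main

import "os"

func main() {
	// Point crictl at the cri-dockerd socket, as the tee command above does.
	cfg := []byte("runtime-endpoint: unix:///var/run/cri-dockerd.sock\n")
	if err := os.WriteFile("/etc/crictl.yaml", cfg, 0o644); err != nil {
		panic(err)
	}
}
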
	I0610 09:41:57.454680    3695 ssh_runner.go:195] Run: which cri-dockerd
	I0610 09:41:57.456848    3695 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 09:41:57.456963    3695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 09:41:57.463239    3695 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 09:41:57.474060    3695 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 09:41:57.555490    3695 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 09:41:57.641770    3695 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 09:41:57.641787    3695 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 09:41:57.653395    3695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:41:57.733751    3695 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:41:59.003388    3695 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.269623811s)
	I0610 09:41:59.003461    3695 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:41:59.090217    3695 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 09:41:59.189012    3695 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:41:59.275486    3695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:41:59.363901    3695 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 09:41:59.376419    3695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:41:59.473723    3695 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0610 09:41:59.524281    3695 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 09:41:59.524377    3695 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 09:41:59.528009    3695 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 09:41:59.528021    3695 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 09:41:59.528035    3695 command_runner.go:130] > Device: 16h/22d	Inode: 857         Links: 1
	I0610 09:41:59.528042    3695 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 09:41:59.528053    3695 command_runner.go:130] > Access: 2023-06-10 16:41:59.428599734 +0000
	I0610 09:41:59.528058    3695 command_runner.go:130] > Modify: 2023-06-10 16:41:59.428599734 +0000
	I0610 09:41:59.528062    3695 command_runner.go:130] > Change: 2023-06-10 16:41:59.430623981 +0000
	I0610 09:41:59.528066    3695 command_runner.go:130] >  Birth: -
	I0610 09:41:59.528116    3695 start.go:549] Will wait 60s for crictl version
	I0610 09:41:59.528166    3695 ssh_runner.go:195] Run: which crictl
	I0610 09:41:59.530799    3695 command_runner.go:130] > /usr/bin/crictl
	I0610 09:41:59.531014    3695 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 09:41:59.553977    3695 command_runner.go:130] > Version:  0.1.0
	I0610 09:41:59.553989    3695 command_runner.go:130] > RuntimeName:  docker
	I0610 09:41:59.554049    3695 command_runner.go:130] > RuntimeVersion:  24.0.2
	I0610 09:41:59.554131    3695 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0610 09:41:59.555403    3695 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0610 09:41:59.555478    3695 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:41:59.571876    3695 command_runner.go:130] > 24.0.2
	I0610 09:41:59.572537    3695 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:41:59.588563    3695 command_runner.go:130] > 24.0.2
	I0610 09:41:59.633401    3695 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0610 09:41:59.633450    3695 main.go:141] libmachine: (multinode-826000) Calling .GetIP
	I0610 09:41:59.633932    3695 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0610 09:41:59.638033    3695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
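
The hosts update uses a rewrite-whole-file idiom: drop any existing host.minikube.internal line, append the fresh mapping, write the result to a temp file, and sudo cp it over /etc/hosts (cp keeps the destination file's inode and permissions, which a mv would not). A rough Go equivalent, with a local path substituted for /etc/hosts for illustration:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any stale line for name and appends "ip\tname",
// mirroring the grep -v / echo / cp pipeline from the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts", "192.168.64.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
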
	I0610 09:41:59.646179    3695 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:41:59.646261    3695 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:41:59.658914    3695 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
	I0610 09:41:59.658926    3695 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
	I0610 09:41:59.658931    3695 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
	I0610 09:41:59.658935    3695 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
	I0610 09:41:59.658938    3695 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0610 09:41:59.658942    3695 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0610 09:41:59.658948    3695 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0610 09:41:59.658953    3695 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 09:41:59.658965    3695 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:41:59.659467    3695 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:41:59.659478    3695 docker.go:563] Images already preloaded, skipping extraction
	I0610 09:41:59.659550    3695 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:41:59.671861    3695 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
	I0610 09:41:59.671872    3695 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
	I0610 09:41:59.671876    3695 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
	I0610 09:41:59.671880    3695 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
	I0610 09:41:59.671884    3695 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0610 09:41:59.671888    3695 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0610 09:41:59.671892    3695 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0610 09:41:59.671896    3695 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 09:41:59.671900    3695 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:41:59.672429    3695 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:41:59.672450    3695 cache_images.go:84] Images are preloaded, skipping loading
	I0610 09:41:59.672533    3695 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 09:41:59.689107    3695 command_runner.go:130] > cgroupfs
	I0610 09:41:59.689672    3695 cni.go:84] Creating CNI manager for ""
	I0610 09:41:59.689680    3695 cni.go:136] 1 nodes found, recommending kindnet
	I0610 09:41:59.689694    3695 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 09:41:59.689715    3695 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.12 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-826000 NodeName:multinode-826000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 09:41:59.689799    3695 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-826000"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 09:41:59.689856    3695 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-826000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 09:41:59.689922    3695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 09:41:59.696327    3695 command_runner.go:130] > kubeadm
	I0610 09:41:59.696333    3695 command_runner.go:130] > kubectl
	I0610 09:41:59.696336    3695 command_runner.go:130] > kubelet
	I0610 09:41:59.696521    3695 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 09:41:59.696572    3695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 09:41:59.703037    3695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0610 09:41:59.714153    3695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 09:41:59.725316    3695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0610 09:41:59.736516    3695 ssh_runner.go:195] Run: grep 192.168.64.12	control-plane.minikube.internal$ /etc/hosts
	I0610 09:41:59.738841    3695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:41:59.746658    3695 certs.go:56] Setting up /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000 for IP: 192.168.64.12
	I0610 09:41:59.746673    3695 certs.go:190] acquiring lock for shared ca certs: {Name:mk1e521581ce58a8d2ad5f887c3da11f6a7a0530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:41:59.746845    3695 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.key
	I0610 09:41:59.746908    3695 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16578-1235/.minikube/proxy-client-ca.key
	I0610 09:41:59.746994    3695 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.key
	I0610 09:41:59.747062    3695 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.key.546ed142
	I0610 09:41:59.747122    3695 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.key
	I0610 09:41:59.747131    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 09:41:59.747159    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 09:41:59.747185    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 09:41:59.747204    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 09:41:59.747225    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 09:41:59.747242    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 09:41:59.747264    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 09:41:59.747284    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 09:41:59.747380    3695 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/1682.pem (1338 bytes)
	W0610 09:41:59.747421    3695 certs.go:433] ignoring /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/1682_empty.pem, impossibly tiny 0 bytes
	I0610 09:41:59.747432    3695 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca-key.pem (1675 bytes)
	I0610 09:41:59.747467    3695 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem (1078 bytes)
	I0610 09:41:59.747499    3695 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem (1123 bytes)
	I0610 09:41:59.747535    3695 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem (1679 bytes)
	I0610 09:41:59.747598    3695 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem (1708 bytes)
	I0610 09:41:59.747634    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:41:59.747653    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/1682.pem -> /usr/share/ca-certificates/1682.pem
	I0610 09:41:59.747670    3695 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem -> /usr/share/ca-certificates/16822.pem
	I0610 09:41:59.748151    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 09:41:59.764229    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 09:41:59.780139    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 09:41:59.796338    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 09:41:59.812226    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 09:41:59.828076    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 09:41:59.843828    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 09:41:59.860041    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 09:41:59.875903    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 09:41:59.891565    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/1682.pem --> /usr/share/ca-certificates/1682.pem (1338 bytes)
	I0610 09:41:59.907079    3695 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem --> /usr/share/ca-certificates/16822.pem (1708 bytes)
	I0610 09:41:59.922458    3695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 09:41:59.933695    3695 ssh_runner.go:195] Run: openssl version
	I0610 09:41:59.937048    3695 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0610 09:41:59.937172    3695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 09:41:59.944117    3695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:41:59.946953    3695 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:41:59.947133    3695 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:41:59.947197    3695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:41:59.950721    3695 command_runner.go:130] > b5213941
	I0610 09:41:59.950758    3695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
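
The hash printed by `openssl x509 -hash -noout` is OpenSSL's subject-name hash, which its directory lookup expects as the link name, so /etc/ssl/certs/b5213941.0 lets any OpenSSL-linked program resolve minikubeCA.pem by subject. A small Go sketch that shells out to openssl to derive the link name (assumes openssl is on PATH and the certificate from the log is present):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" in this run
	// OpenSSL's CApath lookup resolves /etc/ssl/certs/<subject-hash>.<n>
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}
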
	I0610 09:41:59.958031    3695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1682.pem && ln -fs /usr/share/ca-certificates/1682.pem /etc/ssl/certs/1682.pem"
	I0610 09:41:59.964996    3695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1682.pem
	I0610 09:41:59.967840    3695 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 16:27 /usr/share/ca-certificates/1682.pem
	I0610 09:41:59.967940    3695 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 10 16:27 /usr/share/ca-certificates/1682.pem
	I0610 09:41:59.967983    3695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1682.pem
	I0610 09:41:59.971297    3695 command_runner.go:130] > 51391683
	I0610 09:41:59.971513    3695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1682.pem /etc/ssl/certs/51391683.0"
	I0610 09:41:59.978525    3695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16822.pem && ln -fs /usr/share/ca-certificates/16822.pem /etc/ssl/certs/16822.pem"
	I0610 09:41:59.985563    3695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16822.pem
	I0610 09:41:59.988406    3695 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 16:27 /usr/share/ca-certificates/16822.pem
	I0610 09:41:59.988500    3695 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 10 16:27 /usr/share/ca-certificates/16822.pem
	I0610 09:41:59.988535    3695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16822.pem
	I0610 09:41:59.991891    3695 command_runner.go:130] > 3ec20f2e
	I0610 09:41:59.992118    3695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16822.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 09:41:59.999173    3695 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 09:42:00.001810    3695 command_runner.go:130] > ca.crt
	I0610 09:42:00.001819    3695 command_runner.go:130] > ca.key
	I0610 09:42:00.001828    3695 command_runner.go:130] > healthcheck-client.crt
	I0610 09:42:00.001832    3695 command_runner.go:130] > healthcheck-client.key
	I0610 09:42:00.001836    3695 command_runner.go:130] > peer.crt
	I0610 09:42:00.001841    3695 command_runner.go:130] > peer.key
	I0610 09:42:00.001847    3695 command_runner.go:130] > server.crt
	I0610 09:42:00.001853    3695 command_runner.go:130] > server.key
	I0610 09:42:00.001934    3695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 09:42:00.005695    3695 command_runner.go:130] > Certificate will not expire
	I0610 09:42:00.005732    3695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 09:42:00.009186    3695 command_runner.go:130] > Certificate will not expire
	I0610 09:42:00.009435    3695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 09:42:00.012771    3695 command_runner.go:130] > Certificate will not expire
	I0610 09:42:00.012983    3695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 09:42:00.016403    3695 command_runner.go:130] > Certificate will not expire
	I0610 09:42:00.016641    3695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 09:42:00.020036    3695 command_runner.go:130] > Certificate will not expire
	I0610 09:42:00.020365    3695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0610 09:42:00.023734    3695 command_runner.go:130] > Certificate will not expire
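
Each `-checkend 86400` asks whether the certificate remains valid for the next 86400 seconds (24 hours); exit status 0 with "Certificate will not expire" means no regeneration is needed before cluster start. An equivalent check in Go with crypto/x509, using one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same question as `openssl x509 -checkend 86400`: still valid in 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
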
	I0610 09:42:00.023987    3695 kubeadm.go:404] StartCluster: {Name:multinode-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:42:00.024078    3695 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 09:42:00.036667    3695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 09:42:00.042882    3695 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0610 09:42:00.042890    3695 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0610 09:42:00.042895    3695 command_runner.go:130] > /var/lib/minikube/etcd:
	I0610 09:42:00.042918    3695 command_runner.go:130] > member
	I0610 09:42:00.043001    3695 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0610 09:42:00.043013    3695 kubeadm.go:636] restartCluster start
	I0610 09:42:00.043057    3695 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 09:42:00.049234    3695 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:00.049519    3695 kubeconfig.go:135] verify returned: extract IP: "multinode-826000" does not appear in /Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:42:00.049595    3695 kubeconfig.go:146] "multinode-826000" context is missing from /Users/jenkins/minikube-integration/16578-1235/kubeconfig - will repair!
	I0610 09:42:00.050631    3695 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/kubeconfig: {Name:mk52bc17fccce955e53da0cb42ca8ae2dd34c214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:42:00.051397    3695 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:42:00.051577    3695 kapi.go:59] client config for multinode-826000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.key", CAFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x257f980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 09:42:00.052063    3695 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 09:42:00.052238    3695 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 09:42:00.058416    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:00.058460    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:00.066560    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:00.582463    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:00.582634    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:00.593280    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:01.081049    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:01.081165    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:01.091552    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:01.581735    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:01.581900    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:01.593176    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:02.082299    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:02.082421    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:02.092906    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:02.582484    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:02.582642    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:02.594002    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:03.082545    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:03.082682    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:03.093846    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:03.580672    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:03.580784    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:03.591941    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:04.081711    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:04.081847    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:04.092270    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:04.580593    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:04.580775    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:04.591612    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:05.082436    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:05.082535    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:05.092658    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:05.582455    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:05.582623    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:05.593603    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:06.082471    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:06.082624    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:06.093196    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:06.582450    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:06.582598    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:06.592887    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:07.081679    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:07.081845    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:07.092493    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:07.581418    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:07.581600    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:07.593189    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:08.081225    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:08.081381    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:08.090856    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:08.581458    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:08.581641    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:08.591349    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:09.081284    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:09.081423    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:09.091507    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:09.580895    3695 api_server.go:166] Checking apiserver status ...
	I0610 09:42:09.581060    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0610 09:42:09.591770    3695 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0610 09:42:10.059621    3695 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
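
The twenty-odd entries above are a single poll loop: roughly every 500ms the runner retries "sudo pgrep -xnf kube-apiserver.*minikube.*", and when the surrounding context deadline expires the start logic concludes the cluster needs reconfiguring. A minimal sketch of that poll-until-deadline pattern, assuming a local exec.Command in place of minikube's SSH-backed runner (the helper name and timeout below are illustrative, not minikube's):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls pgrep until a PID appears or ctx expires.
// Stand-in for the ssh_runner-based loop in the log; runs locally here.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 1 when nothing matches, so a non-nil err means "not yet".
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	fmt.Println(pid, err)
}

pgrep's non-zero exit on no match is why every failed probe in the log carries "Process exited with status 1" with empty stdout and stderr.
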
	I0610 09:42:10.081130    3695 kubeadm.go:1123] stopping kube-system containers ...
	I0610 09:42:10.081269    3695 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 09:42:10.098371    3695 command_runner.go:130] > 12619bc2bf57
	I0610 09:42:10.098382    3695 command_runner.go:130] > e628a3dfc251
	I0610 09:42:10.098386    3695 command_runner.go:130] > b080919c1ecf
	I0610 09:42:10.098389    3695 command_runner.go:130] > 2494e4985fe3
	I0610 09:42:10.098393    3695 command_runner.go:130] > dcf36c339d8e
	I0610 09:42:10.098408    3695 command_runner.go:130] > 3246cc4a932c
	I0610 09:42:10.098416    3695 command_runner.go:130] > f4c3162aaa5c
	I0610 09:42:10.098420    3695 command_runner.go:130] > fe54448abb1a
	I0610 09:42:10.098423    3695 command_runner.go:130] > ba32349cda75
	I0610 09:42:10.098427    3695 command_runner.go:130] > c0054420e3b8
	I0610 09:42:10.098431    3695 command_runner.go:130] > ae72b9818103
	I0610 09:42:10.098445    3695 command_runner.go:130] > 0a2f2c979d7b
	I0610 09:42:10.098450    3695 command_runner.go:130] > 2023590fd394
	I0610 09:42:10.098466    3695 command_runner.go:130] > 2d94b625d191
	I0610 09:42:10.098473    3695 command_runner.go:130] > 1e876d1d39ca
	I0610 09:42:10.098484    3695 command_runner.go:130] > 8f3a0f3eaddd
	I0610 09:42:10.098982    3695 docker.go:459] Stopping containers: [12619bc2bf57 e628a3dfc251 b080919c1ecf 2494e4985fe3 dcf36c339d8e 3246cc4a932c f4c3162aaa5c fe54448abb1a ba32349cda75 c0054420e3b8 ae72b9818103 0a2f2c979d7b 2023590fd394 2d94b625d191 1e876d1d39ca 8f3a0f3eaddd]
	I0610 09:42:10.099052    3695 ssh_runner.go:195] Run: docker stop 12619bc2bf57 e628a3dfc251 b080919c1ecf 2494e4985fe3 dcf36c339d8e 3246cc4a932c f4c3162aaa5c fe54448abb1a ba32349cda75 c0054420e3b8 ae72b9818103 0a2f2c979d7b 2023590fd394 2d94b625d191 1e876d1d39ca 8f3a0f3eaddd
	I0610 09:42:10.111827    3695 command_runner.go:130] > 12619bc2bf57
	I0610 09:42:10.112198    3695 command_runner.go:130] > e628a3dfc251
	I0610 09:42:10.112206    3695 command_runner.go:130] > b080919c1ecf
	I0610 09:42:10.112209    3695 command_runner.go:130] > 2494e4985fe3
	I0610 09:42:10.112213    3695 command_runner.go:130] > dcf36c339d8e
	I0610 09:42:10.112216    3695 command_runner.go:130] > 3246cc4a932c
	I0610 09:42:10.112219    3695 command_runner.go:130] > f4c3162aaa5c
	I0610 09:42:10.112223    3695 command_runner.go:130] > fe54448abb1a
	I0610 09:42:10.112227    3695 command_runner.go:130] > ba32349cda75
	I0610 09:42:10.112232    3695 command_runner.go:130] > c0054420e3b8
	I0610 09:42:10.112236    3695 command_runner.go:130] > ae72b9818103
	I0610 09:42:10.112239    3695 command_runner.go:130] > 0a2f2c979d7b
	I0610 09:42:10.112242    3695 command_runner.go:130] > 2023590fd394
	I0610 09:42:10.112245    3695 command_runner.go:130] > 2d94b625d191
	I0610 09:42:10.112250    3695 command_runner.go:130] > 1e876d1d39ca
	I0610 09:42:10.112253    3695 command_runner.go:130] > 8f3a0f3eaddd
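
The container teardown above is two shell invocations: list the IDs of containers whose names match the kube-system pattern, then stop them all in a single "docker stop". A sketch of the same two-step flow, assuming a local docker CLI rather than the SSH runner used in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers mirrors the two-step teardown in the log:
// collect matching container IDs, then stop them in one invocation.
func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	fmt.Printf("Stopping containers: %v\n", ids)
	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println("stop failed:", err)
	}
}
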
	I0610 09:42:10.112886    3695 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 09:42:10.124162    3695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 09:42:10.131575    3695 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0610 09:42:10.131585    3695 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0610 09:42:10.131590    3695 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0610 09:42:10.131596    3695 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 09:42:10.131612    3695 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 09:42:10.131650    3695 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 09:42:10.138135    3695 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0610 09:42:10.138148    3695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 09:42:10.197087    3695 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 09:42:10.197355    3695 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0610 09:42:10.197704    3695 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0610 09:42:10.198037    3695 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 09:42:10.198460    3695 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0610 09:42:10.198923    3695 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0610 09:42:10.199330    3695 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0610 09:42:10.199711    3695 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0610 09:42:10.200138    3695 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0610 09:42:10.200459    3695 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 09:42:10.200858    3695 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 09:42:10.201281    3695 command_runner.go:130] > [certs] Using the existing "sa" key
	I0610 09:42:10.202102    3695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 09:42:10.238833    3695 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 09:42:10.599475    3695 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 09:42:10.822872    3695 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 09:42:10.942766    3695 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 09:42:11.056324    3695 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 09:42:11.058321    3695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 09:42:11.106630    3695 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 09:42:11.107281    3695 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 09:42:11.107490    3695 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 09:42:11.199677    3695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 09:42:11.249844    3695 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 09:42:11.249856    3695 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 09:42:11.254083    3695 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 09:42:11.256866    3695 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 09:42:11.258304    3695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0610 09:42:11.309775    3695 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 09:42:11.312463    3695 command_runner.go:130] ! W0610 16:42:11.337366    1277 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
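
The five commands above walk kubeadm's init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, each with the pinned binaries directory prefixed onto PATH. A sketch of the same sequence run locally; on a real control-plane host this requires root and the paths shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the init-phase sequence from the log against
// a single kubeadm config. Paths follow the log; adjust for your host.
func runInitPhases() error {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		cmdline := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases(); err != nil {
		fmt.Println(err)
	}
}

Because the certs and kubeconfig phases are idempotent, a rerun against existing state produces the "Using existing ..." lines seen above rather than regenerating material.
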
	I0610 09:42:11.312486    3695 api_server.go:52] waiting for apiserver process to appear ...
	I0610 09:42:11.312543    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:42:11.822997    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:42:12.323267    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:42:12.823273    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:42:13.323813    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:42:13.333040    3695 command_runner.go:130] > 1614
	I0610 09:42:13.333166    3695 api_server.go:72] duration metric: took 2.020689528s to wait for apiserver process to appear ...
	I0610 09:42:13.333176    3695 api_server.go:88] waiting for apiserver healthz status ...
	I0610 09:42:13.333189    3695 api_server.go:253] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0610 09:42:16.381848    3695 api_server.go:279] https://192.168.64.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 09:42:16.381864    3695 api_server.go:103] status: https://192.168.64.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 09:42:16.883445    3695 api_server.go:253] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0610 09:42:16.888834    3695 api_server.go:279] https://192.168.64.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0610 09:42:16.888848    3695 api_server.go:103] status: https://192.168.64.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0610 09:42:17.382110    3695 api_server.go:253] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0610 09:42:17.389346    3695 api_server.go:279] https://192.168.64.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0610 09:42:17.389359    3695 api_server.go:103] status: https://192.168.64.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0610 09:42:17.883051    3695 api_server.go:253] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0610 09:42:17.887976    3695 api_server.go:279] https://192.168.64.12:8443/healthz returned 200:
	ok
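
The health wait above treats 403 and 500 as "not ready yet" and keeps polling: the 403 appears because the probe is anonymous and the RBAC bootstrap roles have not landed, and the 500s report the still-failing poststarthooks, until /healthz finally returns 200 "ok". A sketch of that loop with net/http; TLS verification is skipped because the anonymous probe presents no client certificate (endpoint and interval are taken from the log, the rest is illustrative):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls /healthz until it returns 200, tolerating the
// interim 403 (anonymous) and 500 (poststarthook) responses seen above.
func waitHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe against the apiserver's self-signed cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	fmt.Println(waitHealthz(ctx, "https://192.168.64.12:8443/healthz"))
}
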
	I0610 09:42:17.888031    3695 round_trippers.go:463] GET https://192.168.64.12:8443/version
	I0610 09:42:17.888036    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:17.888044    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:17.888050    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:17.896428    3695 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 09:42:17.896440    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:17.896452    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:17.896458    3695 round_trippers.go:580]     Content-Length: 263
	I0610 09:42:17.896465    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:17 GMT
	I0610 09:42:17.896469    3695 round_trippers.go:580]     Audit-Id: 3b9f768a-e7f1-4b15-a819-b277fbb4889e
	I0610 09:42:17.896474    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:17.896479    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:17.896484    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:17.896503    3695 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.2",
	  "gitCommit": "7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647",
	  "gitTreeState": "clean",
	  "buildDate": "2023-05-17T14:13:28Z",
	  "goVersion": "go1.20.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 09:42:17.896555    3695 api_server.go:141] control plane version: v1.27.2
	I0610 09:42:17.896564    3695 api_server.go:131] duration metric: took 4.56340023s to wait for apiserver health ...
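
With healthz ok, the control-plane version comes from a plain GET of /version, whose body is the small JSON object shown above. A sketch that decodes just the fields the log reports; the struct here is hand-rolled for illustration rather than taken from any particular library:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// versionInfo matches a subset of the /version response fields above;
// json.Decoder silently ignores the rest.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // anonymous probe
	}}
	resp, err := client.Get("https://192.168.64.12:8443/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.27.2
}
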
	I0610 09:42:17.896572    3695 cni.go:84] Creating CNI manager for ""
	I0610 09:42:17.896578    3695 cni.go:136] 1 nodes found, recommending kindnet
	I0610 09:42:17.920748    3695 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 09:42:17.940521    3695 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 09:42:17.949828    3695 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0610 09:42:17.949851    3695 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0610 09:42:17.949859    3695 command_runner.go:130] > Device: 11h/17d	Inode: 3541        Links: 1
	I0610 09:42:17.949867    3695 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 09:42:17.949881    3695 command_runner.go:130] > Access: 2023-06-10 16:41:28.801178021 +0000
	I0610 09:42:17.949887    3695 command_runner.go:130] > Modify: 2023-06-07 05:33:21.000000000 +0000
	I0610 09:42:17.949891    3695 command_runner.go:130] > Change: 2023-06-10 16:41:27.538177940 +0000
	I0610 09:42:17.949899    3695 command_runner.go:130] >  Birth: -
	I0610 09:42:17.949961    3695 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0610 09:42:17.949970    3695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 09:42:17.966093    3695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 09:42:19.101184    3695 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0610 09:42:19.104769    3695 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0610 09:42:19.105858    3695 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0610 09:42:19.116219    3695 command_runner.go:130] > daemonset.apps/kindnet configured
	I0610 09:42:19.117935    3695 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.151830478s)
	I0610 09:42:19.117958    3695 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 09:42:19.118010    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0610 09:42:19.118015    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.118025    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.118032    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.120603    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:19.120612    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.120618    3695 round_trippers.go:580]     Audit-Id: 81dfb173-c286-467e-a52f-4bfe87dfa295
	I0610 09:42:19.120626    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.120635    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.120655    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.120665    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.120671    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.121243    3695 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"479"},"items":[{"metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57165 chars]
	I0610 09:42:19.123550    3695 system_pods.go:59] 8 kube-system pods found
	I0610 09:42:19.123566    3695 system_pods.go:61] "coredns-5d78c9869d-r9sjl" [d3e6fbc7-ad9e-47a1-8592-9a22062f0845] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0610 09:42:19.123573    3695 system_pods.go:61] "etcd-multinode-826000" [9b124acd-926c-431e-bc35-6b845e46eefa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0610 09:42:19.123578    3695 system_pods.go:61] "kindnet-9r8df" [39c3c671-53e3-4745-ad44-d4d88bac2e7b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0610 09:42:19.123583    3695 system_pods.go:61] "kube-apiserver-multinode-826000" [f3b403ee-f6c6-47cb-baf3-3c15231b7625] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 09:42:19.123589    3695 system_pods.go:61] "kube-controller-manager-multinode-826000" [bc079029-af76-412a-b16a-e3bd76a3354a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0610 09:42:19.123593    3695 system_pods.go:61] "kube-proxy-7dxj9" [52c8c8ff-4db3-4df4-9a64-dfa1f0221f20] Running
	I0610 09:42:19.123599    3695 system_pods.go:61] "kube-scheduler-multinode-826000" [49d5bdcb-168b-4719-917a-80bd9859ccb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0610 09:42:19.123604    3695 system_pods.go:61] "storage-provisioner" [045816f3-b7b8-4909-8dc7-42d6d795adb1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0610 09:42:19.123608    3695 system_pods.go:74] duration metric: took 5.645756ms to wait for pod list to return data ...
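
The pod sweep above is a single list call against /api/v1/namespaces/kube-system/pods followed by a per-pod readiness summary, which is where the "Running / Ready:ContainersNotReady" lines come from. A client-go sketch of the same check, assuming the standard kubeconfig location (output format is illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// A pod counts as ready only when its Ready condition is True,
		// independent of its Running phase.
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%q phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}
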
	I0610 09:42:19.123614    3695 node_conditions.go:102] verifying NodePressure condition ...
	I0610 09:42:19.123643    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
	I0610 09:42:19.123648    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.123664    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.123671    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.125559    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:19.125567    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.125572    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.125577    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.125583    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.125588    3695 round_trippers.go:580]     Audit-Id: 496a5031-2609-4327-86e2-52c7270a7825
	I0610 09:42:19.125595    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.125599    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.125705    3695 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"479"},"items":[{"metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5184 chars]
	I0610 09:42:19.126036    3695 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0610 09:42:19.126051    3695 node_conditions.go:123] node cpu capacity is 2
	I0610 09:42:19.126062    3695 node_conditions.go:105] duration metric: took 2.444544ms to run NodePressure ...
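
The NodePressure step reads /api/v1/nodes once and records each node's ephemeral-storage and CPU capacity, the two figures logged above. A client-go sketch printing the same values, again assuming the standard kubeconfig location:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The two capacity figures the log reports per node.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}
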
	I0610 09:42:19.126073    3695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 09:42:19.218532    3695 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0610 09:42:19.249202    3695 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0610 09:42:19.250232    3695 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0610 09:42:19.250289    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0610 09:42:19.250296    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.250306    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.250312    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.252754    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:19.252762    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.252767    3695 round_trippers.go:580]     Audit-Id: 9dc58a55-0ec1-4067-8245-1eba4563934e
	I0610 09:42:19.252772    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.252779    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.252786    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.252790    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.252796    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.253206    3695 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"481"},"items":[{"metadata":{"name":"etcd-multinode-826000","namespace":"kube-system","uid":"9b124acd-926c-431e-bc35-6b845e46eefa","resourceVersion":"414","creationTimestamp":"2023-06-10T16:40:40Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"4257ff4fa7ee28e8b93d5e2345c387ba","kubernetes.io/config.mirror":"4257ff4fa7ee28e8b93d5e2345c387ba","kubernetes.io/config.seen":"2023-06-10T16:40:35.743576396Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 29763 chars]
	I0610 09:42:19.253913    3695 kubeadm.go:787] kubelet initialised
	I0610 09:42:19.253922    3695 kubeadm.go:788] duration metric: took 3.680321ms waiting for restarted kubelet to initialise ...
	I0610 09:42:19.253928    3695 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:42:19.253956    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0610 09:42:19.253961    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.253978    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.253986    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.255889    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:19.255898    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.255907    3695 round_trippers.go:580]     Audit-Id: 48fef419-ba73-4772-9ccd-8c61ba58eca6
	I0610 09:42:19.255915    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.255922    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.255927    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.255932    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.255937    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.256604    3695 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"481"},"items":[{"metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57165 chars]
	I0610 09:42:19.257860    3695 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-r9sjl" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:19.257896    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:19.257901    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.257907    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.257913    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.259163    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:19.259175    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.259184    3695 round_trippers.go:580]     Audit-Id: bda768f5-4424-43e1-8993-cc24462ce41a
	I0610 09:42:19.259202    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.259213    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.259218    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.259225    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.259230    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.259331    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:19.259573    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:19.259580    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.259586    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.259591    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.260708    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:19.260717    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.260722    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.260727    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.260733    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.260738    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.260743    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.260747    3695 round_trippers.go:580]     Audit-Id: 0aa8187d-e3d7-4613-aa2c-a44e43e00da4
	I0610 09:42:19.260849    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:19.261047    3695 pod_ready.go:97] node "multinode-826000" hosting pod "coredns-5d78c9869d-r9sjl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-826000" has status "Ready":"False"
	I0610 09:42:19.261056    3695 pod_ready.go:81] duration metric: took 3.186415ms waiting for pod "coredns-5d78c9869d-r9sjl" in "kube-system" namespace to be "Ready" ...
	E0610 09:42:19.261060    3695 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-826000" hosting pod "coredns-5d78c9869d-r9sjl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-826000" has status "Ready":"False"
	I0610 09:42:19.261068    3695 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:19.261095    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-826000
	I0610 09:42:19.261100    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.261105    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.261111    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.262194    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:19.262202    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.262206    3695 round_trippers.go:580]     Audit-Id: bffb5122-a37e-4511-bae3-b523c5313299
	I0610 09:42:19.262211    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.262216    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.262221    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.262226    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.262230    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.262329    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-826000","namespace":"kube-system","uid":"9b124acd-926c-431e-bc35-6b845e46eefa","resourceVersion":"414","creationTimestamp":"2023-06-10T16:40:40Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"4257ff4fa7ee28e8b93d5e2345c387ba","kubernetes.io/config.mirror":"4257ff4fa7ee28e8b93d5e2345c387ba","kubernetes.io/config.seen":"2023-06-10T16:40:35.743576396Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0610 09:42:19.262538    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:19.262545    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.262551    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.262557    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.263661    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:19.263672    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.263680    3695 round_trippers.go:580]     Audit-Id: 6c7ce9ff-3623-4565-8702-f208e165d6f7
	I0610 09:42:19.263688    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.263693    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.263701    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.263706    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.263712    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.263800    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:19.263963    3695 pod_ready.go:97] node "multinode-826000" hosting pod "etcd-multinode-826000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-826000" has status "Ready":"False"
	I0610 09:42:19.263972    3695 pod_ready.go:81] duration metric: took 2.89955ms waiting for pod "etcd-multinode-826000" in "kube-system" namespace to be "Ready" ...
	E0610 09:42:19.263977    3695 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-826000" hosting pod "etcd-multinode-826000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-826000" has status "Ready":"False"
	I0610 09:42:19.263985    3695 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:19.264009    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-826000
	I0610 09:42:19.264014    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.264019    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.264025    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.265160    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:19.265167    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.265172    3695 round_trippers.go:580]     Audit-Id: 38b89550-1c96-4e41-920d-f507c710b12e
	I0610 09:42:19.265177    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.265181    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.265186    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.265191    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.265196    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.265312    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-826000","namespace":"kube-system","uid":"f3b403ee-f6c6-47cb-baf3-3c15231b7625","resourceVersion":"418","creationTimestamp":"2023-06-10T16:40:40Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"376ee319583f65c2f2f990eb64ecbee8","kubernetes.io/config.mirror":"376ee319583f65c2f2f990eb64ecbee8","kubernetes.io/config.seen":"2023-06-10T16:40:35.743576953Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7852 chars]
	I0610 09:42:19.265543    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:19.265549    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.265555    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.265561    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.266927    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:19.266934    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.266939    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.266953    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.266964    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.266970    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.266975    3695 round_trippers.go:580]     Audit-Id: 83ccdca4-0781-4be4-bd58-4fbd1bde3cec
	I0610 09:42:19.266981    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.267118    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:19.267280    3695 pod_ready.go:97] node "multinode-826000" hosting pod "kube-apiserver-multinode-826000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-826000" has status "Ready":"False"
	I0610 09:42:19.267288    3695 pod_ready.go:81] duration metric: took 3.298259ms waiting for pod "kube-apiserver-multinode-826000" in "kube-system" namespace to be "Ready" ...
	E0610 09:42:19.267293    3695 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-826000" hosting pod "kube-apiserver-multinode-826000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-826000" has status "Ready":"False"
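
Each pod_ready block above short-circuits when the hosting node reports Ready=False: rather than spending the 4m0s budget waiting on a pod that cannot become Ready, the wait is skipped with the E-level message shown. A client-go sketch of that node gate (the pod name is taken from the log; the kubeconfig path and error handling are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	n, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-apiserver-multinode-826000", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ok, err := nodeReady(cs, pod.Spec.NodeName)
	if err != nil {
		panic(err)
	}
	if !ok {
		// Mirrors the "skipping!" branch in the log.
		fmt.Printf("node %q not Ready; skipping wait for %q\n", pod.Spec.NodeName, pod.Name)
	}
}
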
	I0610 09:42:19.267299    3695 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:19.320120    3695 request.go:628] Waited for 52.762615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-826000
	I0610 09:42:19.320196    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-826000
	I0610 09:42:19.320205    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.320248    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.320261    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.323782    3695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:42:19.323801    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.323809    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.323815    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.323823    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.323829    3695 round_trippers.go:580]     Audit-Id: fa5bf487-361f-430a-a4e0-27b946ffe18e
	I0610 09:42:19.323837    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.323845    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.324025    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-826000","namespace":"kube-system","uid":"bc079029-af76-412a-b16a-e3bd76a3354a","resourceVersion":"419","creationTimestamp":"2023-06-10T16:40:40Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dadf7af017919599a45f7ef25c850049","kubernetes.io/config.mirror":"dadf7af017919599a45f7ef25c850049","kubernetes.io/config.seen":"2023-06-10T16:40:35.743573226Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7436 chars]
	I0610 09:42:19.518285    3695 request.go:628] Waited for 193.891201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:19.518347    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:19.518372    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.518378    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.518384    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.520239    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:19.520262    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.520278    3695 round_trippers.go:580]     Audit-Id: a27df425-5c0a-4989-8834-bf25ea0d5967
	I0610 09:42:19.520285    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.520290    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.520295    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.520302    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.520306    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.520384    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:19.520592    3695 pod_ready.go:97] node "multinode-826000" hosting pod "kube-controller-manager-multinode-826000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-826000" has status "Ready":"False"
	I0610 09:42:19.520601    3695 pod_ready.go:81] duration metric: took 253.298097ms waiting for pod "kube-controller-manager-multinode-826000" in "kube-system" namespace to be "Ready" ...
	E0610 09:42:19.520610    3695 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-826000" hosting pod "kube-controller-manager-multinode-826000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-826000" has status "Ready":"False"
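
The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's client-side rate limiter: the rest.Config dump later in this log shows QPS:0, Burst:0, which client-go treats as the defaults of 5 requests/sec with a burst of 10. A minimal sketch of building such a client and raising those limits, assuming k8s.io/client-go is used directly rather than minikube's own wiring:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newClient builds a clientset from a kubeconfig path. With QPS/Burst left
	// at zero, client-go falls back to 5 QPS / burst 10 and emits the
	// "client-side throttling" waits seen in this log; raising them here
	// would shorten those waits.
	func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50 // hypothetical values for illustration, not what minikube sets
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}
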
	I0610 09:42:19.520616    3695 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7dxj9" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:19.718569    3695 request.go:628] Waited for 197.91279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7dxj9
	I0610 09:42:19.718654    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7dxj9
	I0610 09:42:19.718664    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.718675    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.718686    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.722142    3695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:42:19.722160    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.722168    3695 round_trippers.go:580]     Audit-Id: 7dba43f8-66c9-4b7a-ba43-6ffdd9ac352c
	I0610 09:42:19.722175    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.722182    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.722189    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.722195    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.722203    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.722352    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7dxj9","generateName":"kube-proxy-","namespace":"kube-system","uid":"52c8c8ff-4db3-4df4-9a64-dfa1f0221f20","resourceVersion":"477","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a54e86e6-ea1b-4f1a-a115-3032051cb5cd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a54e86e6-ea1b-4f1a-a115-3032051cb5cd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5736 chars]
	I0610 09:42:19.918251    3695 request.go:628] Waited for 195.534775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:19.918325    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:19.918336    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:19.918348    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:19.918361    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:19.921664    3695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:42:19.921683    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:19.921695    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:19.921712    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:19.921722    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:19 GMT
	I0610 09:42:19.921739    3695 round_trippers.go:580]     Audit-Id: 8ebfa033-8201-4dc3-927a-596991026f37
	I0610 09:42:19.921751    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:19.921762    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:19.921933    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:19.922198    3695 pod_ready.go:97] node "multinode-826000" hosting pod "kube-proxy-7dxj9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-826000" has status "Ready":"False"
	I0610 09:42:19.922211    3695 pod_ready.go:81] duration metric: took 401.590811ms waiting for pod "kube-proxy-7dxj9" in "kube-system" namespace to be "Ready" ...
	E0610 09:42:19.922218    3695 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-826000" hosting pod "kube-proxy-7dxj9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-826000" has status "Ready":"False"
	I0610 09:42:19.922228    3695 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:20.119050    3695 request.go:628] Waited for 196.759508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-826000
	I0610 09:42:20.119097    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-826000
	I0610 09:42:20.119105    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:20.119116    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:20.119129    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:20.122343    3695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:42:20.122360    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:20.122368    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:20.122375    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:20.122381    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:20.122389    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:20 GMT
	I0610 09:42:20.122396    3695 round_trippers.go:580]     Audit-Id: 7f2e16ba-93a2-491e-ba15-cadf3c8e1e03
	I0610 09:42:20.122402    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:20.122721    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-826000","namespace":"kube-system","uid":"49d5bdcb-168b-4719-917a-80bd9859ccb6","resourceVersion":"420","creationTimestamp":"2023-06-10T16:40:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"07dc3f9536175f6e9e243e6c2d78c2e4","kubernetes.io/config.mirror":"07dc3f9536175f6e9e243e6c2d78c2e4","kubernetes.io/config.seen":"2023-06-10T16:40:42.865304771Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0610 09:42:20.318051    3695 request.go:628] Waited for 195.033695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:20.318133    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:20.318143    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:20.318155    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:20.318166    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:20.320952    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:20.320969    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:20.320977    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:20.320995    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:20 GMT
	I0610 09:42:20.321002    3695 round_trippers.go:580]     Audit-Id: dc3f42a8-acac-42c5-adca-5019de48547e
	I0610 09:42:20.321008    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:20.321016    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:20.321022    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:20.321105    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:20.321363    3695 pod_ready.go:97] node "multinode-826000" hosting pod "kube-scheduler-multinode-826000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-826000" has status "Ready":"False"
	I0610 09:42:20.321375    3695 pod_ready.go:81] duration metric: took 399.142863ms waiting for pod "kube-scheduler-multinode-826000" in "kube-system" namespace to be "Ready" ...
	E0610 09:42:20.321382    3695 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-826000" hosting pod "kube-scheduler-multinode-826000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-826000" has status "Ready":"False"
	I0610 09:42:20.321399    3695 pod_ready.go:38] duration metric: took 1.067466913s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
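
Each pod_ready check above follows the same shape: fetch the system pod, fetch the node hosting it, and abandon the per-pod 4m0s wait early ("skipping!") when that node's Ready condition is not True. A sketch of that check, assuming k8s.io/client-go and hypothetical names; this is the logic the log implies, not minikube's actual pod_ready.go source:

	package main

	import (
		"context"
		"errors"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// errNodeNotReady mirrors the "(skipping!)" branch logged above: when the
	// node hosting a system pod is not Ready, the wait for that pod is
	// abandoned instead of running to its full timeout.
	var errNodeNotReady = errors.New("hosting node not Ready")

	func checkSystemPod(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				return false, errNodeNotReady // give up early, as in the log
			}
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
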
	I0610 09:42:20.321415    3695 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 09:42:20.328946    3695 command_runner.go:130] > -16
	I0610 09:42:20.329269    3695 ops.go:34] apiserver oom_adj: -16
	I0610 09:42:20.329277    3695 kubeadm.go:640] restartCluster took 20.286331002s
	I0610 09:42:20.329283    3695 kubeadm.go:406] StartCluster complete in 20.305375855s
	I0610 09:42:20.329292    3695 settings.go:142] acquiring lock: {Name:mkb9b6482d5ac8949a51ff4918d4bb9ad74e8d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:42:20.329369    3695 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:42:20.329742    3695 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/kubeconfig: {Name:mk52bc17fccce955e53da0cb42ca8ae2dd34c214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:42:20.329988    3695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 09:42:20.330021    3695 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0610 09:42:20.330073    3695 addons.go:66] Setting storage-provisioner=true in profile "multinode-826000"
	I0610 09:42:20.330086    3695 addons.go:228] Setting addon storage-provisioner=true in "multinode-826000"
	W0610 09:42:20.330090    3695 addons.go:237] addon storage-provisioner should already be in state true
	I0610 09:42:20.330094    3695 addons.go:66] Setting default-storageclass=true in profile "multinode-826000"
	I0610 09:42:20.330122    3695 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-826000"
	I0610 09:42:20.330124    3695 host.go:66] Checking if "multinode-826000" exists ...
	I0610 09:42:20.330139    3695 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:42:20.330359    3695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:42:20.330378    3695 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:42:20.330390    3695 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:42:20.330407    3695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:42:20.330427    3695 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:42:20.331231    3695 kapi.go:59] client config for multinode-826000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.key", CAFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x257f980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 09:42:20.333629    3695 round_trippers.go:463] GET https://192.168.64.12:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 09:42:20.333639    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:20.333646    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:20.333652    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:20.335523    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:20.335534    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:20.335539    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:20.335546    3695 round_trippers.go:580]     Content-Length: 291
	I0610 09:42:20.335553    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:20 GMT
	I0610 09:42:20.335559    3695 round_trippers.go:580]     Audit-Id: 54762d1f-81c8-4277-8adb-595a8524cada
	I0610 09:42:20.335564    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:20.335569    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:20.335574    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:20.335589    3695 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"625a723b-e519-4e66-a2da-66daece80ce5","resourceVersion":"480","creationTimestamp":"2023-06-10T16:40:42Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0610 09:42:20.335713    3695 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-826000" context rescaled to 1 replicas
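
The rescale above goes through the Deployment's scale subresource (the GET of .../deployments/coredns/scale followed by kapi.go:248). A sketch of that round trip using client-go's GetScale/UpdateScale, under the assumption that the update is skipped when the replica count already matches, as it does in this run where spec.replicas is already 1:

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescaleCoreDNS reads the coredns Deployment's Scale subresource and
	// updates it only if the desired replica count differs; in this log the
	// count is already 1, so no PUT to the scale subresource is needed.
	func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if scale.Spec.Replicas == replicas {
			return nil
		}
		scale.Spec.Replicas = replicas
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}
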
	I0610 09:42:20.335748    3695 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.64.12 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:42:20.337875    3695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51238
	I0610 09:42:20.357227    3695 out.go:177] * Verifying Kubernetes components...
	I0610 09:42:20.338268    3695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51239
	I0610 09:42:20.357667    3695 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:42:20.399209    3695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:42:20.399550    3695 main.go:141] libmachine: Using API Version  1
	I0610 09:42:20.399565    3695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:42:20.399645    3695 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:42:20.399804    3695 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:42:20.400035    3695 main.go:141] libmachine: Using API Version  1
	I0610 09:42:20.400047    3695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:42:20.400200    3695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:42:20.400226    3695 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:42:20.400265    3695 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:42:20.400368    3695 main.go:141] libmachine: (multinode-826000) Calling .GetState
	I0610 09:42:20.401114    3695 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:42:20.401162    3695 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3708
	I0610 09:42:20.402972    3695 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:42:20.403162    3695 kapi.go:59] client config for multinode-826000: &rest.Config{Host:"https://192.168.64.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000/client.key", CAFile:"/Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x257f980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 09:42:20.403429    3695 round_trippers.go:463] GET https://192.168.64.12:8443/apis/storage.k8s.io/v1/storageclasses
	I0610 09:42:20.403436    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:20.403443    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:20.403449    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:20.405327    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:20.405344    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:20.405349    3695 round_trippers.go:580]     Audit-Id: 6740f24e-f3c9-4135-b982-0023b48e610c
	I0610 09:42:20.405355    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:20.405360    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:20.405365    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:20.405370    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:20.405375    3695 round_trippers.go:580]     Content-Length: 1273
	I0610 09:42:20.405381    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:20 GMT
	I0610 09:42:20.405413    3695 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"481"},"items":[{"metadata":{"name":"standard","uid":"2f64806c-7924-44a5-b6f3-9da25571ed16","resourceVersion":"357","creationTimestamp":"2023-06-10T16:40:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-06-10T16:40:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0610 09:42:20.405792    3695 request.go:1188] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2f64806c-7924-44a5-b6f3-9da25571ed16","resourceVersion":"357","creationTimestamp":"2023-06-10T16:40:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-06-10T16:40:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0610 09:42:20.405828    3695 round_trippers.go:463] PUT https://192.168.64.12:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0610 09:42:20.405835    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:20.405841    3695 round_trippers.go:473]     Content-Type: application/json
	I0610 09:42:20.405846    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:20.405853    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:20.407536    3695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51242
	I0610 09:42:20.407845    3695 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:42:20.407988    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:20.407997    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:20.408002    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:20.408007    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:20.408014    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:20.408018    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:20.408023    3695 round_trippers.go:580]     Content-Length: 1220
	I0610 09:42:20.408029    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:20 GMT
	I0610 09:42:20.408034    3695 round_trippers.go:580]     Audit-Id: 94638ef1-56e9-4f7d-8f36-28a003708fa3
	I0610 09:42:20.408079    3695 request.go:1188] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2f64806c-7924-44a5-b6f3-9da25571ed16","resourceVersion":"357","creationTimestamp":"2023-06-10T16:40:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-06-10T16:40:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0610 09:42:20.408147    3695 addons.go:228] Setting addon default-storageclass=true in "multinode-826000"
	W0610 09:42:20.408158    3695 addons.go:237] addon default-storageclass should already be in state true
	I0610 09:42:20.408177    3695 host.go:66] Checking if "multinode-826000" exists ...
	I0610 09:42:20.408181    3695 main.go:141] libmachine: Using API Version  1
	I0610 09:42:20.408191    3695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:42:20.408404    3695 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:42:20.408423    3695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:42:20.408447    3695 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:42:20.408499    3695 main.go:141] libmachine: (multinode-826000) Calling .GetState
	I0610 09:42:20.408579    3695 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:42:20.409088    3695 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3708
	I0610 09:42:20.410423    3695 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:42:20.415708    3695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51244
	I0610 09:42:20.448100    3695 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:42:20.448841    3695 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:42:20.455233    3695 command_runner.go:130] > apiVersion: v1
	I0610 09:42:20.469312    3695 command_runner.go:130] > data:
	I0610 09:42:20.469328    3695 command_runner.go:130] >   Corefile: |
	I0610 09:42:20.469350    3695 command_runner.go:130] >     .:53 {
	I0610 09:42:20.469362    3695 command_runner.go:130] >         log
	I0610 09:42:20.469374    3695 command_runner.go:130] >         errors
	I0610 09:42:20.469381    3695 command_runner.go:130] >         health {
	I0610 09:42:20.469395    3695 command_runner.go:130] >            lameduck 5s
	I0610 09:42:20.469404    3695 command_runner.go:130] >         }
	I0610 09:42:20.469415    3695 command_runner.go:130] >         ready
	I0610 09:42:20.469425    3695 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0610 09:42:20.469435    3695 command_runner.go:130] >            pods insecure
	I0610 09:42:20.469449    3695 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0610 09:42:20.469452    3695 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:42:20.469458    3695 command_runner.go:130] >            ttl 30
	I0610 09:42:20.469472    3695 command_runner.go:130] >         }
	I0610 09:42:20.469472    3695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 09:42:20.469480    3695 command_runner.go:130] >         prometheus :9153
	I0610 09:42:20.469488    3695 command_runner.go:130] >         hosts {
	I0610 09:42:20.469497    3695 command_runner.go:130] >            192.168.64.1 host.minikube.internal
	I0610 09:42:20.469507    3695 command_runner.go:130] >            fallthrough
	I0610 09:42:20.469514    3695 command_runner.go:130] >         }
	I0610 09:42:20.469516    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:42:20.469524    3695 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0610 09:42:20.469533    3695 command_runner.go:130] >            max_concurrent 1000
	I0610 09:42:20.469541    3695 command_runner.go:130] >         }
	I0610 09:42:20.469548    3695 command_runner.go:130] >         cache 30
	I0610 09:42:20.469564    3695 command_runner.go:130] >         loop
	I0610 09:42:20.469584    3695 command_runner.go:130] >         reload
	I0610 09:42:20.469598    3695 command_runner.go:130] >         loadbalance
	I0610 09:42:20.469606    3695 command_runner.go:130] >     }
	I0610 09:42:20.469616    3695 command_runner.go:130] > kind: ConfigMap
	I0610 09:42:20.469625    3695 command_runner.go:130] > metadata:
	I0610 09:42:20.469640    3695 command_runner.go:130] >   creationTimestamp: "2023-06-10T16:40:42Z"
	I0610 09:42:20.469648    3695 command_runner.go:130] >   name: coredns
	I0610 09:42:20.469691    3695 command_runner.go:130] >   namespace: kube-system
	I0610 09:42:20.469705    3695 command_runner.go:130] >   resourceVersion: "356"
	I0610 09:42:20.469717    3695 command_runner.go:130] >   uid: 7a9d9c8f-b950-4b01-817d-0d3621e11e25
	I0610 09:42:20.469869    3695 node_ready.go:35] waiting up to 6m0s for node "multinode-826000" to be "Ready" ...
	I0610 09:42:20.469904    3695 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
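
For readability, the coredns ConfigMap that command_runner.go prints line by line above (interleaved with the addon installer's output) reassembles to the following; the "192.168.64.1 host.minikube.internal" entry in the hosts block is what start.go:889 checks for before deciding to skip the update:

	apiVersion: v1
	data:
	  Corefile: |
	    .:53 {
	        log
	        errors
	        health {
	           lameduck 5s
	        }
	        ready
	        kubernetes cluster.local in-addr.arpa ip6.arpa {
	           pods insecure
	           fallthrough in-addr.arpa ip6.arpa
	           ttl 30
	        }
	        prometheus :9153
	        hosts {
	           192.168.64.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf {
	           max_concurrent 1000
	        }
	        cache 30
	        loop
	        reload
	        loadbalance
	    }
	kind: ConfigMap
	metadata:
	  creationTimestamp: "2023-06-10T16:40:42Z"
	  name: coredns
	  namespace: kube-system
	  resourceVersion: "356"
	  uid: 7a9d9c8f-b950-4b01-817d-0d3621e11e25
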
	I0610 09:42:20.469891    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:42:20.470312    3695 main.go:141] libmachine: Using API Version  1
	I0610 09:42:20.470335    3695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:42:20.470363    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:42:20.470593    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:42:20.470755    3695 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
	I0610 09:42:20.470766    3695 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:42:20.471281    3695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:42:20.471314    3695 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:42:20.479101    3695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51247
	I0610 09:42:20.479485    3695 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:42:20.479865    3695 main.go:141] libmachine: Using API Version  1
	I0610 09:42:20.479888    3695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:42:20.480100    3695 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:42:20.480222    3695 main.go:141] libmachine: (multinode-826000) Calling .GetState
	I0610 09:42:20.480312    3695 main.go:141] libmachine: (multinode-826000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:42:20.480391    3695 main.go:141] libmachine: (multinode-826000) DBG | hyperkit pid from json: 3708
	I0610 09:42:20.481315    3695 main.go:141] libmachine: (multinode-826000) Calling .DriverName
	I0610 09:42:20.481483    3695 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 09:42:20.481492    3695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 09:42:20.481502    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHHostname
	I0610 09:42:20.481571    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHPort
	I0610 09:42:20.481662    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHKeyPath
	I0610 09:42:20.481738    3695 main.go:141] libmachine: (multinode-826000) Calling .GetSSHUsername
	I0610 09:42:20.481815    3695 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000/id_rsa Username:docker}
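
The sshutil.go lines above open direct SSH connections to the VM (192.168.64.12:22, user docker, the per-machine id_rsa key) in order to copy the addon manifests onto it. A minimal sketch of such a client using golang.org/x/crypto/ssh; this library choice is an assumption for illustration, as minikube uses its own sshutil/ssh_runner wrappers:

	package main

	import (
		"os"

		"golang.org/x/crypto/ssh"
	)

	// dialVM opens an SSH connection with public-key auth, matching the
	// parameters logged above (IP, port 22, per-machine private key, user).
	func dialVM(ip, keyPath, user string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
		}
		return ssh.Dial("tcp", ip+":22", cfg)
	}
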
	I0610 09:42:20.518205    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:20.518217    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:20.518223    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:20.518229    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:20.520189    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:20.520200    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:20.520206    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:20 GMT
	I0610 09:42:20.520211    3695 round_trippers.go:580]     Audit-Id: 90d81e1c-e655-4ed4-a7cc-d3aaa386d44c
	I0610 09:42:20.520224    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:20.520230    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:20.520237    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:20.520242    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:20.520392    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:20.550413    3695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 09:42:20.555044    3695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:42:20.887984    3695 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
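
The "unchanged" result is expected: decoding the kubectl.kubernetes.io/last-applied-configuration annotation visible in the StorageClass response bodies above gives the same manifest that storageclass.yaml re-applies, i.e.:

	apiVersion: storage.k8s.io/v1
	kind: StorageClass
	metadata:
	  name: standard
	  annotations:
	    storageclass.kubernetes.io/is-default-class: "true"
	  labels:
	    addonmanager.kubernetes.io/mode: EnsureExists
	provisioner: k8s.io/minikube-hostpath
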
	I0610 09:42:20.889752    3695 main.go:141] libmachine: Making call to close driver server
	I0610 09:42:20.889762    3695 main.go:141] libmachine: (multinode-826000) Calling .Close
	I0610 09:42:20.889941    3695 main.go:141] libmachine: (multinode-826000) DBG | Closing plugin on server side
	I0610 09:42:20.889970    3695 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:42:20.889978    3695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:42:20.889990    3695 main.go:141] libmachine: Making call to close driver server
	I0610 09:42:20.889997    3695 main.go:141] libmachine: (multinode-826000) Calling .Close
	I0610 09:42:20.890117    3695 main.go:141] libmachine: (multinode-826000) DBG | Closing plugin on server side
	I0610 09:42:20.890124    3695 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:42:20.890134    3695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:42:20.890146    3695 main.go:141] libmachine: Making call to close driver server
	I0610 09:42:20.890152    3695 main.go:141] libmachine: (multinode-826000) Calling .Close
	I0610 09:42:20.890297    3695 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:42:20.890302    3695 main.go:141] libmachine: (multinode-826000) DBG | Closing plugin on server side
	I0610 09:42:20.890305    3695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:42:20.996025    3695 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0610 09:42:20.998631    3695 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0610 09:42:21.001247    3695 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0610 09:42:21.004702    3695 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0610 09:42:21.007302    3695 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0610 09:42:21.013809    3695 command_runner.go:130] > pod/storage-provisioner configured
	I0610 09:42:21.015547    3695 main.go:141] libmachine: Making call to close driver server
	I0610 09:42:21.015565    3695 main.go:141] libmachine: (multinode-826000) Calling .Close
	I0610 09:42:21.015730    3695 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:42:21.015739    3695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:42:21.015747    3695 main.go:141] libmachine: Making call to close driver server
	I0610 09:42:21.015757    3695 main.go:141] libmachine: (multinode-826000) Calling .Close
	I0610 09:42:21.015772    3695 main.go:141] libmachine: (multinode-826000) DBG | Closing plugin on server side
	I0610 09:42:21.015886    3695 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:42:21.015895    3695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:42:21.015901    3695 main.go:141] libmachine: (multinode-826000) DBG | Closing plugin on server side
	I0610 09:42:21.020887    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:21.038364    3695 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0610 09:42:21.038391    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:21.085106    3695 addons.go:499] enable addons completed in 755.085641ms: enabled=[default-storageclass storage-provisioner]
	I0610 09:42:21.038401    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:21.085178    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:21.087427    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:21.087445    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:21.087457    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:21.087465    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:21.087474    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:21.087480    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:21 GMT
	I0610 09:42:21.087487    3695 round_trippers.go:580]     Audit-Id: 4016f6c8-45e6-4a09-84db-a9537b2439ce
	I0610 09:42:21.087494    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:21.087753    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:21.522078    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:21.522092    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:21.522098    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:21.522103    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:21.523923    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:21.523941    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:21.523947    3695 round_trippers.go:580]     Audit-Id: b2ac1617-3617-48be-8e09-5a7197e8de52
	I0610 09:42:21.523953    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:21.523958    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:21.523963    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:21.523968    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:21.523973    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:21 GMT
	I0610 09:42:21.524031    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:22.021866    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:22.021885    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:22.021897    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:22.021907    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:22.024653    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:22.024668    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:22.024677    3695 round_trippers.go:580]     Audit-Id: e28b87c7-7b5b-4b85-aafb-74b97156d358
	I0610 09:42:22.024687    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:22.024696    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:22.024702    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:22.024709    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:22.024723    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:22 GMT
	I0610 09:42:22.024917    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:22.521895    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:22.521911    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:22.521918    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:22.521923    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:22.523676    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:22.523689    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:22.523695    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:22 GMT
	I0610 09:42:22.523700    3695 round_trippers.go:580]     Audit-Id: 92b565f1-5b67-4fab-a962-61b91ace14f4
	I0610 09:42:22.523707    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:22.523714    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:22.523721    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:22.523728    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:22.523808    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:22.524024    3695 node_ready.go:58] node "multinode-826000" has status "Ready":"False"
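
From here on the log is node_ready.go polling GET /api/v1/nodes/multinode-826000 roughly every 500ms (compare the request timestamps) against the 6m0s budget declared at node_ready.go:35 above. A sketch of that loop, assuming k8s.io/client-go and the k8s.io/apimachinery wait helper; the function is hypothetical, not minikube's actual source:

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the Node's Ready condition until it is True or the
	// timeout expires; a 500ms interval matches the cadence of the GETs in
	// this log, and timeout would be 6m0s per node_ready.go:35 above.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}
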
	I0610 09:42:23.020976    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:23.021002    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:23.021014    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:23.021025    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:23.023649    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:23.023664    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:23.023672    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:23.023678    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:23.023685    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:23 GMT
	I0610 09:42:23.023701    3695 round_trippers.go:580]     Audit-Id: 18f1e160-66a8-4a8c-ac0d-9ca0233ab93e
	I0610 09:42:23.023709    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:23.023724    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:23.023831    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:23.521798    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:23.521819    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:23.521832    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:23.521842    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:23.524648    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:23.524663    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:23.524679    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:23.524686    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:23.524695    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:23.524701    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:23 GMT
	I0610 09:42:23.524708    3695 round_trippers.go:580]     Audit-Id: 953f266d-de46-4d7d-a00e-c2dffe616ab0
	I0610 09:42:23.524714    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:23.524793    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:24.020832    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:24.020864    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:24.020873    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:24.020880    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:24.022985    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:24.022996    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:24.023003    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:24 GMT
	I0610 09:42:24.023030    3695 round_trippers.go:580]     Audit-Id: 9d940a5f-c284-44a5-a4ff-8ae74c17399f
	I0610 09:42:24.023041    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:24.023046    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:24.023056    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:24.023062    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:24.023163    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:24.521988    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:24.522012    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:24.522028    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:24.522039    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:24.524916    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:24.524936    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:24.524970    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:24.524982    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:24.525007    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:24 GMT
	I0610 09:42:24.525017    3695 round_trippers.go:580]     Audit-Id: 85775f42-12b7-43a6-82bf-b329320a636f
	I0610 09:42:24.525025    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:24.525031    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:24.525099    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:24.525344    3695 node_ready.go:58] node "multinode-826000" has status "Ready":"False"
	I0610 09:42:25.020774    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:25.020789    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:25.020795    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:25.020800    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:25.022366    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:25.022377    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:25.022384    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:25.022392    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:25.022398    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:25.022404    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:25 GMT
	I0610 09:42:25.022409    3695 round_trippers.go:580]     Audit-Id: bdfd72d1-9e57-48e7-88dc-7c2e08937050
	I0610 09:42:25.022414    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:25.022505    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:25.522155    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:25.522179    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:25.522192    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:25.522201    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:25.524237    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:25.524250    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:25.524261    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:25.524271    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:25.524278    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:25.524284    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:25 GMT
	I0610 09:42:25.524290    3695 round_trippers.go:580]     Audit-Id: c735ac4e-1b08-484b-b24e-65d2a3c459be
	I0610 09:42:25.524302    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:25.524470    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:26.021326    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:26.021388    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:26.021406    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:26.021419    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:26.024266    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:26.024281    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:26.024289    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:26 GMT
	I0610 09:42:26.024296    3695 round_trippers.go:580]     Audit-Id: a98ffc42-bcb1-48ae-853d-bac62c1f8083
	I0610 09:42:26.024302    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:26.024309    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:26.024316    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:26.024323    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:26.024419    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:26.521294    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:26.521344    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:26.521357    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:26.521368    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:26.523900    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:26.523916    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:26.523926    3695 round_trippers.go:580]     Audit-Id: 60ea20bd-f8ed-4a0a-8f7d-a1e7e06cbd41
	I0610 09:42:26.523935    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:26.523941    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:26.523952    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:26.523959    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:26.523966    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:26 GMT
	I0610 09:42:26.524109    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"412","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5131 chars]
	I0610 09:42:27.022320    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:27.022353    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:27.022366    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:27.022376    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:27.025333    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:27.025351    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:27.025359    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:27 GMT
	I0610 09:42:27.025366    3695 round_trippers.go:580]     Audit-Id: e680ed7e-998f-4021-8638-70cd20bd35d8
	I0610 09:42:27.025372    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:27.025380    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:27.025386    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:27.025393    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:27.025501    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:27.025754    3695 node_ready.go:49] node "multinode-826000" has status "Ready":"True"
	I0610 09:42:27.025765    3695 node_ready.go:38] duration metric: took 6.555872007s waiting for node "multinode-826000" to be "Ready" ...
	I0610 09:42:27.025771    3695 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
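	
	(As a rough illustration of the ~500ms GET loop recorded above: the repeated requests to /api/v1/nodes/multinode-826000 poll the node's Ready condition until it flips to True. The following minimal client-go sketch shows that kind of wait loop; the kubeconfig path, function names, and structure are assumptions for illustration only, not minikube's actual implementation.)
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeIsReady reports whether the node's Ready condition is True,
	// i.e. the check behind the node_ready.go "Ready":"False"/"True" lines.
	func nodeIsReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Hypothetical kubeconfig path for the example.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			n, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-826000", metav1.GetOptions{})
			if err != nil && !apierrors.IsNotFound(err) {
				panic(err)
			}
			if err == nil && nodeIsReady(n) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		fmt.Println("timed out waiting for node to become Ready")
	}
	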
	I0610 09:42:27.025824    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0610 09:42:27.025831    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:27.025839    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:27.025848    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:27.027900    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:27.027909    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:27.027915    3695 round_trippers.go:580]     Audit-Id: 689cfb0e-cd83-4043-bca4-eafcfbc69bdc
	I0610 09:42:27.027935    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:27.027948    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:27.027965    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:27.027974    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:27.027982    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:27 GMT
	I0610 09:42:27.028580    3695 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"501"},"items":[{"metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55791 chars]
	I0610 09:42:27.029865    3695 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-r9sjl" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:27.029908    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:27.029913    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:27.029919    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:27.029924    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:27.031332    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:27.031346    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:27.031355    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:27.031363    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:27.031378    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:27.031384    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:27.031389    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:27 GMT
	I0610 09:42:27.031394    3695 round_trippers.go:580]     Audit-Id: be4bb78d-a2e1-4798-b136-4d3a76deb978
	I0610 09:42:27.031495    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:27.031720    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:27.031726    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:27.031732    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:27.031738    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:27.032927    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:27.032934    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:27.032939    3695 round_trippers.go:580]     Audit-Id: 126dfae3-d246-4bb8-b530-607c33ba030e
	I0610 09:42:27.032948    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:27.032953    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:27.032960    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:27.032965    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:27.032970    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:27 GMT
	I0610 09:42:27.033065    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:27.533501    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:27.533528    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:27.533540    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:27.533550    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:27.536534    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:27.536550    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:27.536558    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:27.536564    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:27.536570    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:27.536577    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:27.536584    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:27 GMT
	I0610 09:42:27.536598    3695 round_trippers.go:580]     Audit-Id: 77060d79-b07c-42c5-b9a3-2212a36d5731
	I0610 09:42:27.536852    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:27.537219    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:27.537228    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:27.537236    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:27.537244    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:27.538732    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:27.538741    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:27.538746    3695 round_trippers.go:580]     Audit-Id: 3bd29971-4822-440f-9bf2-2c16294f9ff8
	I0610 09:42:27.538751    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:27.538755    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:27.538760    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:27.538770    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:27.538788    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:27 GMT
	I0610 09:42:27.538937    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:28.034261    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:28.034281    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:28.034293    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:28.034306    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:28.037001    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:28.037015    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:28.037022    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:28.037029    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:28.037036    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:28 GMT
	I0610 09:42:28.037045    3695 round_trippers.go:580]     Audit-Id: 9efa85ab-56e6-4980-b675-a0ab153d6c8e
	I0610 09:42:28.037056    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:28.037066    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:28.037406    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:28.037792    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:28.037802    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:28.037810    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:28.037817    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:28.039503    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:28.039513    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:28.039524    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:28.039530    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:28.039537    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:28.039542    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:28.039547    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:28 GMT
	I0610 09:42:28.039552    3695 round_trippers.go:580]     Audit-Id: 4f45c1df-8a1b-4757-ae23-48107300e8a6
	I0610 09:42:28.039644    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:28.534477    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:28.534503    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:28.534515    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:28.534525    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:28.537365    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:28.537379    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:28.537387    3695 round_trippers.go:580]     Audit-Id: cea62ce0-97b1-456c-859a-908680b4aacf
	I0610 09:42:28.537398    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:28.537405    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:28.537411    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:28.537419    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:28.537426    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:28 GMT
	I0610 09:42:28.537510    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:28.537884    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:28.537893    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:28.537900    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:28.537935    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:28.539493    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:28.539502    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:28.539510    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:28.539515    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:28.539521    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:28 GMT
	I0610 09:42:28.539531    3695 round_trippers.go:580]     Audit-Id: d2b0a03d-ec78-4bc2-bb45-03b214d83207
	I0610 09:42:28.539542    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:28.539548    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:28.539650    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:29.033402    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:29.033418    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:29.033429    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:29.033437    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:29.035527    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:29.035542    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:29.035552    3695 round_trippers.go:580]     Audit-Id: cdba5f1d-6182-4231-bdb4-66bc8ae63667
	I0610 09:42:29.035562    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:29.035570    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:29.035580    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:29.035589    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:29.035597    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:29 GMT
	I0610 09:42:29.035697    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:29.035982    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:29.035991    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:29.035998    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:29.036004    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:29.037621    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:29.037632    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:29.037640    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:29.037647    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:29.037653    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:29.037660    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:29.037668    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:29 GMT
	I0610 09:42:29.037703    3695 round_trippers.go:580]     Audit-Id: cc543a1e-d41b-47cf-9996-3bd3a0cbc8f6
	I0610 09:42:29.037792    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:29.037968    3695 pod_ready.go:102] pod "coredns-5d78c9869d-r9sjl" in "kube-system" namespace has status "Ready":"False"
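	
	(For reference, the pod_ready.go "Ready":"False" verdict above comes from inspecting the pod's status conditions, which each GET of coredns-5d78c9869d-r9sjl re-fetches. A self-contained sketch of that check, as an assumed helper rather than minikube's own code:)
	
	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
	)
	
	// podIsReady reports whether status.conditions contains
	// type=Ready with status=True.
	func podIsReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// A pod whose Ready condition is still False, as in the log above.
		p := &corev1.Pod{
			Status: corev1.PodStatus{
				Conditions: []corev1.PodCondition{
					{Type: corev1.PodReady, Status: corev1.ConditionFalse},
				},
			},
		}
		fmt.Println(podIsReady(p)) // prints: false
	}
	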
	I0610 09:42:29.535914    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:29.535946    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:29.535967    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:29.535988    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:29.539597    3695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:42:29.539608    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:29.539614    3695 round_trippers.go:580]     Audit-Id: 71d1c854-6363-4d59-83f5-abfa118f52ef
	I0610 09:42:29.539620    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:29.539626    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:29.539631    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:29.539636    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:29.539641    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:29 GMT
	I0610 09:42:29.539707    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:29.539982    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:29.539988    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:29.539994    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:29.540001    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:29.541431    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:29.541442    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:29.541448    3695 round_trippers.go:580]     Audit-Id: 0e5edecd-aa67-4754-8252-a8cf9aabf802
	I0610 09:42:29.541453    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:29.541457    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:29.541462    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:29.541467    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:29.541472    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:29 GMT
	I0610 09:42:29.541563    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:30.033490    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:30.033509    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:30.033518    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:30.033525    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:30.036083    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:30.036110    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:30.036123    3695 round_trippers.go:580]     Audit-Id: 32134ec4-2df5-4078-bfe4-15b58bc199e0
	I0610 09:42:30.036132    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:30.036143    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:30.036151    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:30.036157    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:30.036167    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:30 GMT
	I0610 09:42:30.036273    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:30.036642    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:30.036648    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:30.036654    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:30.036660    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:30.037869    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:30.037884    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:30.037890    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:30.037897    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:30.037904    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:30.037913    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:30.037918    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:30 GMT
	I0610 09:42:30.037923    3695 round_trippers.go:580]     Audit-Id: 3cf56c13-4602-42be-a4b1-be76c5fa6886
	I0610 09:42:30.038037    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:30.534621    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:30.534651    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:30.534685    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:30.534699    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:30.538660    3695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:42:30.538675    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:30.538683    3695 round_trippers.go:580]     Audit-Id: 92d42224-6b08-488f-ae49-efb9753a1b60
	I0610 09:42:30.538691    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:30.538697    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:30.538704    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:30.538711    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:30.538717    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:30 GMT
	I0610 09:42:30.539033    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:30.539313    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:30.539321    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:30.539327    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:30.539332    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:30.541369    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:30.541378    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:30.541386    3695 round_trippers.go:580]     Audit-Id: f4c87e4d-f962-40f4-b013-e07d3110b506
	I0610 09:42:30.541393    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:30.541399    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:30.541405    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:30.541410    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:30.541415    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:30 GMT
	I0610 09:42:30.541627    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:31.034706    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:31.034729    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:31.034741    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:31.034752    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:31.037983    3695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:42:31.037999    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:31.038007    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:31.038013    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:31.038020    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:31 GMT
	I0610 09:42:31.038026    3695 round_trippers.go:580]     Audit-Id: 33e8254e-8020-448f-b3d4-2b9c119e91c3
	I0610 09:42:31.038039    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:31.038046    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:31.038132    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:31.038507    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:31.038516    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:31.038524    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:31.038531    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:31.040017    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:31.040027    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:31.040032    3695 round_trippers.go:580]     Audit-Id: 2a9aefbe-04de-4bde-85aa-373c8c9c1651
	I0610 09:42:31.040038    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:31.040044    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:31.040049    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:31.040054    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:31.040058    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:31 GMT
	I0610 09:42:31.040279    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:31.040454    3695 pod_ready.go:102] pod "coredns-5d78c9869d-r9sjl" in "kube-system" namespace has status "Ready":"False"
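
Each iteration above performs two GETs (the coredns Pod, then its Node) and logs the pod_ready line while the Pod's Ready condition is still False. A minimal sketch of that per-iteration check, assuming client-go (isPodReady and checkPodOnce are illustrative names, not minikube's actual pod_ready helpers):

	package podready

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady reports whether the Pod's "Ready" condition is True; this is
	// the condition the log's "Ready":"False" / "Ready":"True" lines track.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// checkPodOnce performs a single GET of the Pod, like one poll iteration above.
	func checkPodOnce(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		ready := isPodReady(pod)
		fmt.Printf("pod %q in %q namespace has status \"Ready\":%v\n", name, ns, ready)
		return ready, nil
	}
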
	I0610 09:42:31.533733    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:31.533758    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:31.533791    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:31.533804    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:31.536251    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:31.536270    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:31.536280    3695 round_trippers.go:580]     Audit-Id: d44b84b4-0d54-49b8-ac31-f207b7998c6a
	I0610 09:42:31.536288    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:31.536295    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:31.536303    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:31.536309    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:31.536316    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:31 GMT
	I0610 09:42:31.536403    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:31.536779    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:31.536787    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:31.536795    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:31.536802    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:31.538101    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:31.538110    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:31.538134    3695 round_trippers.go:580]     Audit-Id: 87b6cf6a-8bc9-4a38-aa90-59bf7a4846e9
	I0610 09:42:31.538145    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:31.538153    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:31.538158    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:31.538162    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:31.538167    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:31 GMT
	I0610 09:42:31.538257    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:32.034950    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:32.034968    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:32.034977    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:32.034986    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:32.037506    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:32.037519    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:32.037525    3695 round_trippers.go:580]     Audit-Id: 6f5368be-0584-4180-aa88-48431a83db89
	I0610 09:42:32.037530    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:32.037534    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:32.037539    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:32.037543    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:32.037548    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:32 GMT
	I0610 09:42:32.037622    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:32.037916    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:32.037922    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:32.037927    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:32.037934    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:32.039320    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:32.039330    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:32.039336    3695 round_trippers.go:580]     Audit-Id: 7edc11c8-f015-4a5f-a349-4b980d6de2c7
	I0610 09:42:32.039341    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:32.039352    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:32.039357    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:32.039362    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:32.039367    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:32 GMT
	I0610 09:42:32.039435    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:32.533823    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:32.533872    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:32.533879    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:32.533884    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:32.536009    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:32.536018    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:32.536024    3695 round_trippers.go:580]     Audit-Id: 30d60699-83b7-4830-bd0f-5de93dd0b74b
	I0610 09:42:32.536031    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:32.536036    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:32.536042    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:32.536049    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:32.536058    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:32 GMT
	I0610 09:42:32.536131    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:32.536417    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:32.536423    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:32.536430    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:32.536435    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:32.537845    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:32.537856    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:32.537862    3695 round_trippers.go:580]     Audit-Id: 4cc62937-3ea2-4ef5-87f1-898f5f39671d
	I0610 09:42:32.537881    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:32.537889    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:32.537894    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:32.537900    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:32.537904    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:32 GMT
	I0610 09:42:32.537973    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:33.035427    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:33.035455    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:33.035491    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:33.035529    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:33.038924    3695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:42:33.038956    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:33.038961    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:33.038966    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:33.038971    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:33.038976    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:33 GMT
	I0610 09:42:33.038981    3695 round_trippers.go:580]     Audit-Id: 57660dad-9a9d-4150-b80a-e2ca532fb967
	I0610 09:42:33.038985    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:33.039107    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:33.039392    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:33.039398    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:33.039404    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:33.039409    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:33.040709    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:33.040718    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:33.040724    3695 round_trippers.go:580]     Audit-Id: 832449ce-d48b-4a09-bf93-0c42fde999b0
	I0610 09:42:33.040735    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:33.040745    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:33.040752    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:33.040760    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:33.040770    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:33 GMT
	I0610 09:42:33.040895    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:33.041077    3695 pod_ready.go:102] pod "coredns-5d78c9869d-r9sjl" in "kube-system" namespace has status "Ready":"False"
	I0610 09:42:33.535185    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:33.535200    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:33.535207    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:33.535212    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:33.536947    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:33.536959    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:33.536966    3695 round_trippers.go:580]     Audit-Id: 400ffad4-94ee-46dd-85bc-f4495675607c
	I0610 09:42:33.536971    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:33.536977    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:33.536982    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:33.536987    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:33.536992    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:33 GMT
	I0610 09:42:33.537065    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:33.537348    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:33.537355    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:33.537360    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:33.537365    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:33.538709    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:33.538720    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:33.538728    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:33.538736    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:33 GMT
	I0610 09:42:33.538742    3695 round_trippers.go:580]     Audit-Id: 029d15ba-b63a-4128-9035-f256b7e8c49d
	I0610 09:42:33.538759    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:33.538774    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:33.538781    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:33.538870    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:34.035440    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:34.035466    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:34.035477    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:34.035526    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:34.038785    3695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:42:34.038801    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:34.038809    3695 round_trippers.go:580]     Audit-Id: c84faf6a-b5fe-45eb-85a1-53bdf8d007b8
	I0610 09:42:34.038816    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:34.038822    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:34.038829    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:34.038835    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:34.038847    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:34 GMT
	I0610 09:42:34.038942    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:34.039309    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:34.039317    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:34.039325    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:34.039333    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:34.041027    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:34.041037    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:34.041043    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:34.041048    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:34 GMT
	I0610 09:42:34.041052    3695 round_trippers.go:580]     Audit-Id: d2b3eae9-1745-4e50-ab31-038c2969ade1
	I0610 09:42:34.041068    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:34.041077    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:34.041082    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:34.041164    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:34.534758    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:34.534788    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:34.534801    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:34.534811    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:34.537814    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:34.537830    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:34.537839    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:34 GMT
	I0610 09:42:34.537845    3695 round_trippers.go:580]     Audit-Id: df10a462-f966-4c36-a5b3-3a512eb1d98c
	I0610 09:42:34.537853    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:34.537860    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:34.537866    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:34.537873    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:34.537966    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"417","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0610 09:42:34.538338    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:34.538346    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:34.538354    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:34.538362    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:34.539828    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:34.539837    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:34.539846    3695 round_trippers.go:580]     Audit-Id: 51d7788c-50b5-4b69-a21d-fd090c8552ba
	I0610 09:42:34.539853    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:34.539861    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:34.539869    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:34.539875    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:34.539880    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:34 GMT
	I0610 09:42:34.540000    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:35.035009    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-r9sjl
	I0610 09:42:35.035031    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.035043    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.035054    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.037974    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:35.037989    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.037997    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.038003    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.038010    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.038017    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.038023    3695 round_trippers.go:580]     Audit-Id: 34c2cee7-34e5-4d31-9b86-42288042e995
	I0610 09:42:35.038029    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.038124    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"516","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0610 09:42:35.038505    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:35.038513    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.038521    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.038551    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.040213    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:35.040222    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.040230    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.040241    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.040250    3695 round_trippers.go:580]     Audit-Id: d5d7d730-906f-499c-a0e8-c0b3ccf3075e
	I0610 09:42:35.040255    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.040261    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.040265    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.040339    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:35.040518    3695 pod_ready.go:92] pod "coredns-5d78c9869d-r9sjl" in "kube-system" namespace has status "Ready":"True"
	I0610 09:42:35.040525    3695 pod_ready.go:81] duration metric: took 8.01067946s waiting for pod "coredns-5d78c9869d-r9sjl" in "kube-system" namespace to be "Ready" ...
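
The duration metric line above closes a loop that, per the timestamps, polled roughly every 500 ms under a 6m0s budget ("waiting up to 6m0s"). A sketch of that pattern using apimachinery's wait helpers and the checkPodOnce helper from the earlier sketch (waitPodReady is an illustrative name, not minikube's pod_ready helper; recent apimachinery offers wait.PollUntilContextTimeout, while older code used wait.PollImmediate):

	package podready

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls checkPodOnce every 500ms until the Pod reports Ready
	// or the 6-minute budget seen in the log expires.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				return checkPodOnce(ctx, cs, ns, name)
			})
		if err != nil {
			return err
		}
		fmt.Printf("duration metric: took %s waiting for pod %q in %q namespace to be \"Ready\"\n",
			time.Since(start), name, ns)
		return nil
	}

The log below then applies the same wait in sequence to each control-plane pod: etcd, kube-apiserver, kube-controller-manager, and kube-proxy.
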
	I0610 09:42:35.040531    3695 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:35.040560    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-826000
	I0610 09:42:35.040564    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.040570    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.040582    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.041955    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:35.041967    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.041973    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.041979    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.041984    3695 round_trippers.go:580]     Audit-Id: 28215339-780a-4c3d-b3ee-8589ba50d142
	I0610 09:42:35.041989    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.041993    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.041998    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.042066    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-826000","namespace":"kube-system","uid":"9b124acd-926c-431e-bc35-6b845e46eefa","resourceVersion":"497","creationTimestamp":"2023-06-10T16:40:40Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.64.12:2379","kubernetes.io/config.hash":"4257ff4fa7ee28e8b93d5e2345c387ba","kubernetes.io/config.mirror":"4257ff4fa7ee28e8b93d5e2345c387ba","kubernetes.io/config.seen":"2023-06-10T16:40:35.743576396Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6071 chars]
	I0610 09:42:35.042286    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:35.042292    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.042297    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.042303    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.043580    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:35.043589    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.043595    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.043600    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.043605    3695 round_trippers.go:580]     Audit-Id: 501d9c57-f1a7-4b4e-9d69-f82adcdcd77e
	I0610 09:42:35.043611    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.043619    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.043625    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.043693    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:35.043861    3695 pod_ready.go:92] pod "etcd-multinode-826000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:42:35.043867    3695 pod_ready.go:81] duration metric: took 3.331207ms waiting for pod "etcd-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:35.043874    3695 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:35.043903    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-826000
	I0610 09:42:35.043908    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.043914    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.043919    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.045098    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:35.045106    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.045111    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.045116    3695 round_trippers.go:580]     Audit-Id: 92320942-65ad-4d81-9fbf-674556e78b32
	I0610 09:42:35.045123    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.045129    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.045145    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.045156    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.045245    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-826000","namespace":"kube-system","uid":"f3b403ee-f6c6-47cb-baf3-3c15231b7625","resourceVersion":"494","creationTimestamp":"2023-06-10T16:40:40Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.64.12:8443","kubernetes.io/config.hash":"376ee319583f65c2f2f990eb64ecbee8","kubernetes.io/config.mirror":"376ee319583f65c2f2f990eb64ecbee8","kubernetes.io/config.seen":"2023-06-10T16:40:35.743576953Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7608 chars]
	I0610 09:42:35.045490    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:35.045496    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.045502    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.045507    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.046568    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:35.046575    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.046580    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.046584    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.046590    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.046594    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.046599    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.046605    3695 round_trippers.go:580]     Audit-Id: 03f28451-b58a-45b9-8d2b-ce6b46a6a93b
	I0610 09:42:35.046745    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:35.046940    3695 pod_ready.go:92] pod "kube-apiserver-multinode-826000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:42:35.046961    3695 pod_ready.go:81] duration metric: took 3.084111ms waiting for pod "kube-apiserver-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:35.046967    3695 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:35.047016    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-826000
	I0610 09:42:35.047022    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.047027    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.047033    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.048283    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:35.048291    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.048296    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.048304    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.048312    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.048320    3695 round_trippers.go:580]     Audit-Id: 00d898ea-1ca8-4f3c-aa9e-e30c11095ccd
	I0610 09:42:35.048328    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.048335    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.048406    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-826000","namespace":"kube-system","uid":"bc079029-af76-412a-b16a-e3bd76a3354a","resourceVersion":"498","creationTimestamp":"2023-06-10T16:40:40Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dadf7af017919599a45f7ef25c850049","kubernetes.io/config.mirror":"dadf7af017919599a45f7ef25c850049","kubernetes.io/config.seen":"2023-06-10T16:40:35.743573226Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7174 chars]
	I0610 09:42:35.048626    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:35.048631    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.048638    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.048643    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.049872    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:35.049880    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.049888    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.049893    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.049897    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.049902    3695 round_trippers.go:580]     Audit-Id: 9c745974-e6ba-4d87-b54a-8761b1b671cd
	I0610 09:42:35.049907    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.049911    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.049993    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:35.050164    3695 pod_ready.go:92] pod "kube-controller-manager-multinode-826000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:42:35.050171    3695 pod_ready.go:81] duration metric: took 3.197209ms waiting for pod "kube-controller-manager-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:35.050177    3695 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7dxj9" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:35.050208    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7dxj9
	I0610 09:42:35.050213    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.050219    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.050225    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.051605    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:35.051614    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.051620    3695 round_trippers.go:580]     Audit-Id: 8a7aa131-506d-4f85-b09d-c8e8bc94d4db
	I0610 09:42:35.051628    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.051635    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.051640    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.051645    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.051650    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.051712    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7dxj9","generateName":"kube-proxy-","namespace":"kube-system","uid":"52c8c8ff-4db3-4df4-9a64-dfa1f0221f20","resourceVersion":"477","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a54e86e6-ea1b-4f1a-a115-3032051cb5cd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a54e86e6-ea1b-4f1a-a115-3032051cb5cd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5736 chars]
	I0610 09:42:35.051931    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:35.051937    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.051942    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.051950    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.053376    3695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 09:42:35.053394    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.053406    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.053416    3695 round_trippers.go:580]     Audit-Id: 57b63eea-b46b-4214-87d0-755104ace0a7
	I0610 09:42:35.053428    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.053452    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.053460    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.053466    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.053542    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:35.053716    3695 pod_ready.go:92] pod "kube-proxy-7dxj9" in "kube-system" namespace has status "Ready":"True"
	I0610 09:42:35.053723    3695 pod_ready.go:81] duration metric: took 3.542236ms waiting for pod "kube-proxy-7dxj9" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:35.053728    3695 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:35.237072    3695 request.go:628] Waited for 183.294197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-826000
	I0610 09:42:35.237166    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-826000
	I0610 09:42:35.237176    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.237221    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.237233    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.240187    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:35.240203    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.240210    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.240219    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.240229    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.240239    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.240247    3695 round_trippers.go:580]     Audit-Id: 4a7d1f19-2809-4c56-9815-e4a671b541d6
	I0610 09:42:35.240253    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.240584    3695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-826000","namespace":"kube-system","uid":"49d5bdcb-168b-4719-917a-80bd9859ccb6","resourceVersion":"492","creationTimestamp":"2023-06-10T16:40:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"07dc3f9536175f6e9e243e6c2d78c2e4","kubernetes.io/config.mirror":"07dc3f9536175f6e9e243e6c2d78c2e4","kubernetes.io/config.seen":"2023-06-10T16:40:42.865304771Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4904 chars]
	I0610 09:42:35.436646    3695 request.go:628] Waited for 195.767965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:35.436704    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes/multinode-826000
	I0610 09:42:35.436716    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.436728    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.436741    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.439550    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:35.439565    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.439573    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.439579    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.439586    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.439593    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.439599    3695 round_trippers.go:580]     Audit-Id: 45e4af8f-c7da-41f8-ada3-bbd6e21159cc
	I0610 09:42:35.439609    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.439965    3695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-06-10T16:40:39Z","fieldsType":"FieldsV1","fi [truncated 5004 chars]
	I0610 09:42:35.440235    3695 pod_ready.go:92] pod "kube-scheduler-multinode-826000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:42:35.440243    3695 pod_ready.go:81] duration metric: took 386.511935ms waiting for pod "kube-scheduler-multinode-826000" in "kube-system" namespace to be "Ready" ...
	I0610 09:42:35.440252    3695 pod_ready.go:38] duration metric: took 8.414496794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
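
The pod_ready phase above polls each system pod's Ready condition until it reports True or the per-pod 6m0s budget expires. A minimal client-go sketch of that polling pattern follows (hypothetical helper names, not minikube's actual pod_ready.go; assumes a kubeconfig at the default location). The later sketches in this report reuse this clientset construction and import set.

    // waitPodReady polls until the named pod reports condition Ready=True.
    // Minimal sketch only; error handling is deliberately simple.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat lookup errors as "not ready yet" and keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-multinode-826000", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }
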
	I0610 09:42:35.440265    3695 api_server.go:52] waiting for apiserver process to appear ...
	I0610 09:42:35.440322    3695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:42:35.449255    3695 command_runner.go:130] > 1614
	I0610 09:42:35.449389    3695 api_server.go:72] duration metric: took 15.113675188s to wait for apiserver process to appear ...
	I0610 09:42:35.449396    3695 api_server.go:88] waiting for apiserver healthz status ...
	I0610 09:42:35.449405    3695 api_server.go:253] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0610 09:42:35.453199    3695 api_server.go:279] https://192.168.64.12:8443/healthz returned 200:
	ok
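
minikube probes /healthz with a plain HTTPS GET; the same check can be expressed through client-go's discovery REST client. A hedged sketch, reusing the clientset and imports from the earlier example:

    // healthzOK reports whether GET /healthz on the apiserver returns "ok".
    func healthzOK(ctx context.Context, cs *kubernetes.Clientset) bool {
        raw, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        return err == nil && string(raw) == "ok"
    }
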
	I0610 09:42:35.453235    3695 round_trippers.go:463] GET https://192.168.64.12:8443/version
	I0610 09:42:35.453240    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.453258    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.453267    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.454157    3695 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 09:42:35.454164    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.454169    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.454199    3695 round_trippers.go:580]     Audit-Id: aa4fb587-20a4-49fd-bf2c-98834f7f32b1
	I0610 09:42:35.454206    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.454211    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.454216    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.454224    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.454229    3695 round_trippers.go:580]     Content-Length: 263
	I0610 09:42:35.454239    3695 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.2",
	  "gitCommit": "7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647",
	  "gitTreeState": "clean",
	  "buildDate": "2023-05-17T14:13:28Z",
	  "goVersion": "go1.20.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 09:42:35.454274    3695 api_server.go:141] control plane version: v1.27.2
	I0610 09:42:35.454281    3695 api_server.go:131] duration metric: took 4.880547ms to wait for apiserver health ...
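
The /version response body shown above deserializes into client-go's version.Info type, which the discovery client exposes directly. A sketch of the same read (same clientset as before):

    // controlPlaneVersion returns the apiserver's semantic version, e.g. "v1.27.2".
    func controlPlaneVersion(cs *kubernetes.Clientset) (string, error) {
        info, err := cs.Discovery().ServerVersion() // issues GET /version
        if err != nil {
            return "", err
        }
        return info.GitVersion, nil
    }
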
	I0610 09:42:35.454285    3695 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 09:42:35.635915    3695 request.go:628] Waited for 181.577469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0610 09:42:35.636035    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0610 09:42:35.636047    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.636059    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.636070    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.640368    3695 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 09:42:35.640382    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.640391    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.640419    3695 round_trippers.go:580]     Audit-Id: 8ee44fcb-ef85-4304-84a0-45fddb7a3050
	I0610 09:42:35.640431    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.640440    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.640448    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.640456    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.641059    3695 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"520"},"items":[{"metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"516","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55739 chars]
	I0610 09:42:35.642502    3695 system_pods.go:59] 8 kube-system pods found
	I0610 09:42:35.642511    3695 system_pods.go:61] "coredns-5d78c9869d-r9sjl" [d3e6fbc7-ad9e-47a1-8592-9a22062f0845] Running
	I0610 09:42:35.642514    3695 system_pods.go:61] "etcd-multinode-826000" [9b124acd-926c-431e-bc35-6b845e46eefa] Running
	I0610 09:42:35.642518    3695 system_pods.go:61] "kindnet-9r8df" [39c3c671-53e3-4745-ad44-d4d88bac2e7b] Running
	I0610 09:42:35.642521    3695 system_pods.go:61] "kube-apiserver-multinode-826000" [f3b403ee-f6c6-47cb-baf3-3c15231b7625] Running
	I0610 09:42:35.642524    3695 system_pods.go:61] "kube-controller-manager-multinode-826000" [bc079029-af76-412a-b16a-e3bd76a3354a] Running
	I0610 09:42:35.642527    3695 system_pods.go:61] "kube-proxy-7dxj9" [52c8c8ff-4db3-4df4-9a64-dfa1f0221f20] Running
	I0610 09:42:35.642531    3695 system_pods.go:61] "kube-scheduler-multinode-826000" [49d5bdcb-168b-4719-917a-80bd9859ccb6] Running
	I0610 09:42:35.642534    3695 system_pods.go:61] "storage-provisioner" [045816f3-b7b8-4909-8dc7-42d6d795adb1] Running
	I0610 09:42:35.642537    3695 system_pods.go:74] duration metric: took 188.24974ms to wait for pod list to return data ...
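
The recurring "Waited for ... due to client-side throttling" lines come from client-go's default client-side rate limiter (QPS 5, burst 10), not from API Priority and Fairness on the server, as the log text itself notes. A caller that needs more headroom can raise the limits on the rest.Config before building the clientset; a sketch:

    // newFasterClient builds a clientset with a relaxed client-side rate limit.
    func newFasterClient() (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is rest.DefaultQPS (5)
        cfg.Burst = 100 // default is rest.DefaultBurst (10)
        return kubernetes.NewForConfig(cfg)
    }
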
	I0610 09:42:35.642541    3695 default_sa.go:34] waiting for default service account to be created ...
	I0610 09:42:35.836498    3695 request.go:628] Waited for 193.882802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/default/serviceaccounts
	I0610 09:42:35.836620    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/default/serviceaccounts
	I0610 09:42:35.836630    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:35.836644    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:35.836655    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:35.839560    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:35.839573    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:35.839581    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:35.839594    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:35.839602    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:35.839608    3695 round_trippers.go:580]     Content-Length: 261
	I0610 09:42:35.839615    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:35 GMT
	I0610 09:42:35.839621    3695 round_trippers.go:580]     Audit-Id: de5781b9-7fdb-4c20-ba92-f552b2d26859
	I0610 09:42:35.839628    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:35.839657    3695 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"520"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b8380443-31b8-47c2-9195-5a380347a27a","resourceVersion":"318","creationTimestamp":"2023-06-10T16:40:55Z"}}]}
	I0610 09:42:35.839819    3695 default_sa.go:45] found service account: "default"
	I0610 09:42:35.839829    3695 default_sa.go:55] duration metric: took 197.284817ms for default service account to be created ...
	I0610 09:42:35.839836    3695 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 09:42:36.036663    3695 request.go:628] Waited for 196.710165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0610 09:42:36.036707    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/namespaces/kube-system/pods
	I0610 09:42:36.036715    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:36.036733    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:36.036748    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:36.040557    3695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 09:42:36.040577    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:36.040589    3695 round_trippers.go:580]     Audit-Id: 7dc33542-0dc4-43de-99e5-08a78ce44899
	I0610 09:42:36.040601    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:36.040612    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:36.040626    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:36.040637    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:36.040648    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:36 GMT
	I0610 09:42:36.041139    3695 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"520"},"items":[{"metadata":{"name":"coredns-5d78c9869d-r9sjl","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"d3e6fbc7-ad9e-47a1-8592-9a22062f0845","resourceVersion":"516","creationTimestamp":"2023-06-10T16:40:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a5f182d0-7206-4fb3-8759-811bf207f611","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T16:40:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5f182d0-7206-4fb3-8759-811bf207f611\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55739 chars]
	I0610 09:42:36.042447    3695 system_pods.go:86] 8 kube-system pods found
	I0610 09:42:36.042457    3695 system_pods.go:89] "coredns-5d78c9869d-r9sjl" [d3e6fbc7-ad9e-47a1-8592-9a22062f0845] Running
	I0610 09:42:36.042461    3695 system_pods.go:89] "etcd-multinode-826000" [9b124acd-926c-431e-bc35-6b845e46eefa] Running
	I0610 09:42:36.042464    3695 system_pods.go:89] "kindnet-9r8df" [39c3c671-53e3-4745-ad44-d4d88bac2e7b] Running
	I0610 09:42:36.042468    3695 system_pods.go:89] "kube-apiserver-multinode-826000" [f3b403ee-f6c6-47cb-baf3-3c15231b7625] Running
	I0610 09:42:36.042471    3695 system_pods.go:89] "kube-controller-manager-multinode-826000" [bc079029-af76-412a-b16a-e3bd76a3354a] Running
	I0610 09:42:36.042474    3695 system_pods.go:89] "kube-proxy-7dxj9" [52c8c8ff-4db3-4df4-9a64-dfa1f0221f20] Running
	I0610 09:42:36.042478    3695 system_pods.go:89] "kube-scheduler-multinode-826000" [49d5bdcb-168b-4719-917a-80bd9859ccb6] Running
	I0610 09:42:36.042481    3695 system_pods.go:89] "storage-provisioner" [045816f3-b7b8-4909-8dc7-42d6d795adb1] Running
	I0610 09:42:36.042485    3695 system_pods.go:126] duration metric: took 202.646243ms to wait for k8s-apps to be running ...
	I0610 09:42:36.042492    3695 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 09:42:36.042551    3695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:42:36.051774    3695 system_svc.go:56] duration metric: took 9.281177ms WaitForService to wait for kubelet.
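
The kubelet check above is an exit-status test: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active. A sketch of the same test using only the standard library's os/exec package (run locally for illustration; minikube executes it over SSH inside the VM):

    // kubeletActive reports whether the kubelet systemd unit is active.
    func kubeletActive() bool {
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
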
	I0610 09:42:36.051790    3695 kubeadm.go:581] duration metric: took 15.716078347s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 09:42:36.051802    3695 node_conditions.go:102] verifying NodePressure condition ...
	I0610 09:42:36.237077    3695 request.go:628] Waited for 185.223122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.64.12:8443/api/v1/nodes
	I0610 09:42:36.237225    3695 round_trippers.go:463] GET https://192.168.64.12:8443/api/v1/nodes
	I0610 09:42:36.237236    3695 round_trippers.go:469] Request Headers:
	I0610 09:42:36.237248    3695 round_trippers.go:473]     Accept: application/json, */*
	I0610 09:42:36.237261    3695 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 09:42:36.240061    3695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 09:42:36.240077    3695 round_trippers.go:577] Response Headers:
	I0610 09:42:36.240085    3695 round_trippers.go:580]     Audit-Id: 0b752fb4-c741-4f2d-80c6-b3d72d593a5a
	I0610 09:42:36.240092    3695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 09:42:36.240098    3695 round_trippers.go:580]     Content-Type: application/json
	I0610 09:42:36.240104    3695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3eb62923-11ee-4558-8e34-7d780ff0d56a
	I0610 09:42:36.240116    3695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1bb0c946-70d8-4e5b-92e0-64e2d81a3ec0
	I0610 09:42:36.240125    3695 round_trippers.go:580]     Date: Sat, 10 Jun 2023 16:42:36 GMT
	I0610 09:42:36.240354    3695 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"520"},"items":[{"metadata":{"name":"multinode-826000","uid":"bdbe7de1-a381-440c-ad2e-b5aaeb1e3974","resourceVersion":"500","creationTimestamp":"2023-06-10T16:40:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-826000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"eafc8e84d7336f18f4fb303d71d15fbd84fd16d5","minikube.k8s.io/name":"multinode-826000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T09_40_43_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5057 chars]
	I0610 09:42:36.240613    3695 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0610 09:42:36.240625    3695 node_conditions.go:123] node cpu capacity is 2
	I0610 09:42:36.240634    3695 node_conditions.go:105] duration metric: took 188.828368ms to run NodePressure ...
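
The NodePressure step lists all nodes and reads capacity straight off node.Status, which is where the two capacity figures logged above come from. A sketch of the same read (locals are used so the pointer-receiver Quantity.String method is callable):

    // printNodeCapacity prints the capacity fields checked by NodePressure above.
    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
        return nil
    }
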
	I0610 09:42:36.240643    3695 start.go:228] waiting for startup goroutines ...
	I0610 09:42:36.240651    3695 start.go:233] waiting for cluster config update ...
	I0610 09:42:36.240663    3695 start.go:242] writing updated cluster config ...
	I0610 09:42:36.241130    3695 ssh_runner.go:195] Run: rm -f paused
	I0610 09:42:36.279186    3695 start.go:573] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0610 09:42:36.300044    3695 out.go:177] 
	W0610 09:42:36.321229    3695 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0610 09:42:36.342492    3695 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0610 09:42:36.386542    3695 out.go:177] * Done! kubectl is now configured to use "multinode-826000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 16:41:28 UTC, ends at Sat 2023-06-10 16:42:37 UTC. --
	Jun 10 16:42:17 multinode-826000 dockerd[828]: time="2023-06-10T16:42:17.823234762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:17 multinode-826000 cri-dockerd[1030]: time="2023-06-10T16:42:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f9883f5613a3b32b4c04b22f9cc3daebe434f9f8234ac85543e9fe7bb156b517/resolv.conf as [nameserver 192.168.64.1]"
	Jun 10 16:42:17 multinode-826000 dockerd[828]: time="2023-06-10T16:42:17.959582656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:42:17 multinode-826000 dockerd[828]: time="2023-06-10T16:42:17.959698649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:17 multinode-826000 dockerd[828]: time="2023-06-10T16:42:17.959725050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:42:17 multinode-826000 dockerd[828]: time="2023-06-10T16:42:17.959877219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:18 multinode-826000 cri-dockerd[1030]: time="2023-06-10T16:42:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ebb2772ed9c64ffe05f3739c0b90449ca5540b77a3fbe1c50c067c313497a88f/resolv.conf as [nameserver 192.168.64.1]"
	Jun 10 16:42:18 multinode-826000 dockerd[828]: time="2023-06-10T16:42:18.409075726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:42:18 multinode-826000 dockerd[828]: time="2023-06-10T16:42:18.409192268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:18 multinode-826000 dockerd[828]: time="2023-06-10T16:42:18.409219071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:42:18 multinode-826000 dockerd[828]: time="2023-06-10T16:42:18.409235270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:20 multinode-826000 cri-dockerd[1030]: time="2023-06-10T16:42:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b42123042975e0a2733d510ff5b7dff436088ae55c7330fdf05be6f5d7d18795/resolv.conf as [nameserver 192.168.64.1]"
	Jun 10 16:42:20 multinode-826000 dockerd[828]: time="2023-06-10T16:42:20.549255559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:42:20 multinode-826000 dockerd[828]: time="2023-06-10T16:42:20.549317273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:20 multinode-826000 dockerd[828]: time="2023-06-10T16:42:20.549341378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:42:20 multinode-826000 dockerd[828]: time="2023-06-10T16:42:20.549352886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.312762117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.312808662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.312823461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.312832950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:33 multinode-826000 cri-dockerd[1030]: time="2023-06-10T16:42:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7c86056c94d3df26c2732ba843da6cb214d22264baf724bc497ce210e23d6ef/resolv.conf as [nameserver 192.168.64.1]"
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.687274491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.687428685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.687505429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.687562435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID
	665d2bfd37808       ead0a4a53df89                                                                              4 seconds ago        Running             coredns                   1                   d7c86056c94d3
	45d0df95b7154       b0b1fa0f58c6e                                                                              17 seconds ago       Running             kindnet-cni               1                   b42123042975e
	6785f017705fb       6e38f40d628db                                                                              19 seconds ago       Running             storage-provisioner       1                   ebb2772ed9c64
	5cd149e6a33f9       b8aa50768fd67                                                                              20 seconds ago       Running             kube-proxy                1                   f9883f5613a3b
	1a20ece454029       ac2b7465ebba9                                                                              25 seconds ago       Running             kube-controller-manager   1                   e1cb83b607e86
	b6511a7a9032c       86b6af7dd652c                                                                              25 seconds ago       Running             etcd                      1                   e6d149ccc12e3
	8ba9a16fd0bb2       89e70da428d29                                                                              25 seconds ago       Running             kube-scheduler            1                   bc491bac713bd
	492eebc8d7c90       c5b13e4f7806d                                                                              25 seconds ago       Running             kube-apiserver            1                   5e4eac218705e
	e628a3dfc251b       6e38f40d628db                                                                              About a minute ago   Exited              storage-provisioner       0                   b080919c1ecfc
	12619bc2bf572       ead0a4a53df89                                                                              About a minute ago   Exited              coredns                   0                   2494e4985fe38
	dcf36c339d8e9       kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974   About a minute ago   Exited              kindnet-cni               0                   fe54448abb1ac
	3246cc4a932c7       b8aa50768fd67                                                                              About a minute ago   Exited              kube-proxy                0                   f4c3162aaa5c0
	ba32349cda752       86b6af7dd652c                                                                              2 minutes ago        Exited              etcd                      0                   2d94b625d191b
	c0054420e3b8f       89e70da428d29                                                                              2 minutes ago        Exited              kube-scheduler            0                   1e876d1d39ca0
	ae72b9818103a       ac2b7465ebba9                                                                              2 minutes ago        Exited              kube-controller-manager   0                   8f3a0f3eaddd1
	0a2f2c979d7b0       c5b13e4f7806d                                                                              2 minutes ago        Exited              kube-apiserver            0                   2023590fd394b
	
	* 
	* ==> coredns [12619bc2bf57] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 82b95b61957b89eeea31bdaf6987f010031330ef97d5f8469dbdaa80b119a5b0c9955b961009dd5b77ee3ada002b456836be781510516cbd9d015b1a704a24ea
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55077 - 62156 "HINFO IN 783487967199058609.7483377405974833132. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.004630964s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [665d2bfd3780] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 82b95b61957b89eeea31bdaf6987f010031330ef97d5f8469dbdaa80b119a5b0c9955b961009dd5b77ee3ada002b456836be781510516cbd9d015b1a704a24ea
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60235 - 30170 "HINFO IN 7061034121563959463.7423390287495701571. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004674814s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-826000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-826000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5
	                    minikube.k8s.io/name=multinode-826000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T09_40_43_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 16:40:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-826000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 16:42:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 16:42:26 +0000   Sat, 10 Jun 2023 16:40:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 16:42:26 +0000   Sat, 10 Jun 2023 16:40:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 16:42:26 +0000   Sat, 10 Jun 2023 16:40:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 16:42:26 +0000   Sat, 10 Jun 2023 16:42:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.12
	  Hostname:    multinode-826000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 549be7735a0542d0a254ccc3bb88af35
	  System UUID:                39eb11ee-0000-0000-b579-f01898ef957c
	  Boot ID:                    f1a567ca-36de-47f1-bba2-37d393f013e9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-r9sjl                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     103s
	  kube-system                 etcd-multinode-826000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         118s
	  kube-system                 kindnet-9r8df                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      103s
	  kube-system                 kube-apiserver-multinode-826000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-multinode-826000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-7dxj9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-multinode-826000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 100s               kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientPID     116s               kubelet          Node multinode-826000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node multinode-826000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node multinode-826000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s               node-controller  Node multinode-826000 event: Registered Node multinode-826000 in Controller
	  Normal  NodeReady                93s                kubelet          Node multinode-826000 status is now: NodeReady
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node multinode-826000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node multinode-826000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node multinode-826000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node multinode-826000 event: Registered Node multinode-826000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.027851] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +4.591261] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007042] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.255545] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.040456] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.894050] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +26.539148] systemd-fstab-generator[522]: Ignoring "noauto" for root device
	[  +0.080871] systemd-fstab-generator[533]: Ignoring "noauto" for root device
	[  +0.787752] systemd-fstab-generator[750]: Ignoring "noauto" for root device
	[  +0.214988] systemd-fstab-generator[789]: Ignoring "noauto" for root device
	[  +0.086105] systemd-fstab-generator[800]: Ignoring "noauto" for root device
	[  +0.094774] systemd-fstab-generator[813]: Ignoring "noauto" for root device
	[  +1.354206] systemd-fstab-generator[975]: Ignoring "noauto" for root device
	[  +0.089485] systemd-fstab-generator[986]: Ignoring "noauto" for root device
	[  +0.096423] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +0.091733] systemd-fstab-generator[1008]: Ignoring "noauto" for root device
	[  +0.098751] systemd-fstab-generator[1022]: Ignoring "noauto" for root device
	[Jun10 16:42] systemd-fstab-generator[1260]: Ignoring "noauto" for root device
	[  +0.239018] kauditd_printk_skb: 67 callbacks suppressed
	[ +17.550402] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [b6511a7a9032] <==
	* {"level":"info","ts":"2023-06-10T16:42:13.481Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-10T16:42:13.481Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-10T16:42:13.481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 switched to configuration voters=(9888510509761246144)"}
	{"level":"info","ts":"2023-06-10T16:42:13.481Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","added-peer-id":"893b0beac40933c0","added-peer-peer-urls":["https://192.168.64.12:2380"]}
	{"level":"info","ts":"2023-06-10T16:42:13.481Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:42:13.482Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:42:13.485Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-10T16:42:13.486Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-06-10T16:42:13.486Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"893b0beac40933c0","initial-advertise-peer-urls":["https://192.168.64.12:2380"],"listen-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.12:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-10T16:42:13.486Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-10T16:42:13.486Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-06-10T16:42:15.371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 is starting a new election at term 2"}
	{"level":"info","ts":"2023-06-10T16:42:15.371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-06-10T16:42:15.371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgPreVoteResp from 893b0beac40933c0 at term 2"}
	{"level":"info","ts":"2023-06-10T16:42:15.371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became candidate at term 3"}
	{"level":"info","ts":"2023-06-10T16:42:15.372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgVoteResp from 893b0beac40933c0 at term 3"}
	{"level":"info","ts":"2023-06-10T16:42:15.372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became leader at term 3"}
	{"level":"info","ts":"2023-06-10T16:42:15.372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 893b0beac40933c0 elected leader 893b0beac40933c0 at term 3"}
	{"level":"info","ts":"2023-06-10T16:42:15.374Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"893b0beac40933c0","local-member-attributes":"{Name:multinode-826000 ClientURLs:[https://192.168.64.12:2379]}","request-path":"/0/members/893b0beac40933c0/attributes","cluster-id":"51ecae2d8304f353","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:42:15.374Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:42:15.374Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:42:15.374Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:42:15.374Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:42:15.375Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.12:2379"}
	{"level":"info","ts":"2023-06-10T16:42:15.375Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [ba32349cda75] <==
	* {"level":"info","ts":"2023-06-10T16:40:37.818Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgPreVoteResp from 893b0beac40933c0 at term 1"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became candidate at term 2"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgVoteResp from 893b0beac40933c0 at term 2"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became leader at term 2"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 893b0beac40933c0 elected leader 893b0beac40933c0 at term 2"}
	{"level":"info","ts":"2023-06-10T16:40:37.987Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"893b0beac40933c0","local-member-attributes":"{Name:multinode-826000 ClientURLs:[https://192.168.64.12:2379]}","request-path":"/0/members/893b0beac40933c0/attributes","cluster-id":"51ecae2d8304f353","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:40:37.987Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:40:37.990Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T16:40:37.990Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:40:37.991Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.12:2379"}
	{"level":"info","ts":"2023-06-10T16:40:37.991Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:40:37.994Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:40:38.000Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:40:38.000Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:40:38.000Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:40:38.000Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:41:11.906Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-06-10T16:41:11.906Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"multinode-826000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"]}
	{"level":"info","ts":"2023-06-10T16:41:11.914Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"893b0beac40933c0","current-leader-member-id":"893b0beac40933c0"}
	{"level":"info","ts":"2023-06-10T16:41:11.915Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-06-10T16:41:11.916Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-06-10T16:41:11.916Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"multinode-826000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"]}
	
	* 
	* ==> kernel <==
	*  16:42:38 up 1 min,  0 users,  load average: 0.39, 0.11, 0.04
	Linux multinode-826000 5.10.57 #1 SMP Wed Jun 7 04:45:40 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [45d0df95b715] <==
	* I0610 16:42:20.819996       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0610 16:42:20.820122       1 main.go:107] hostIP = 192.168.64.12
	podIP = 192.168.64.12
	I0610 16:42:20.820364       1 main.go:116] setting mtu 1500 for CNI 
	I0610 16:42:20.820391       1 main.go:146] kindnetd IP family: "ipv4"
	I0610 16:42:20.820426       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 16:42:21.118301       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:42:21.118339       1 main.go:227] handling current node
	I0610 16:42:31.126425       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:42:31.126471       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [dcf36c339d8e] <==
	* I0610 16:41:02.415980       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0610 16:41:02.416124       1 main.go:107] hostIP = 192.168.64.12
	podIP = 192.168.64.12
	I0610 16:41:02.416216       1 main.go:116] setting mtu 1500 for CNI 
	I0610 16:41:02.416259       1 main.go:146] kindnetd IP family: "ipv4"
	I0610 16:41:02.416285       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 16:41:02.724012       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:41:02.724049       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [0a2f2c979d7b] <==
	* W0610 16:41:11.911740       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 16:41:11.911749       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 16:41:11.911767       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	I0610 16:41:11.987257       1 controller.go:228] Shutting down kubernetes service endpoint reconciler
	
	* 
	* ==> kube-apiserver [492eebc8d7c9] <==
	* I0610 16:42:16.420434       1 naming_controller.go:291] Starting NamingConditionController
	I0610 16:42:16.420480       1 establishing_controller.go:76] Starting EstablishingController
	I0610 16:42:16.420513       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0610 16:42:16.420537       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0610 16:42:16.420583       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0610 16:42:16.455560       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0610 16:42:16.456736       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0610 16:42:16.472320       1 shared_informer.go:318] Caches are synced for configmaps
	I0610 16:42:16.475984       1 cache.go:39] Caches are synced for autoregister controller
	I0610 16:42:16.476186       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0610 16:42:16.476214       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0610 16:42:16.476553       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0610 16:42:16.479358       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 16:42:16.482584       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 16:42:16.483838       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0610 16:42:16.546063       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0610 16:42:17.138175       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 16:42:17.376939       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 16:42:19.159598       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0610 16:42:19.246314       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0610 16:42:19.251533       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0610 16:42:19.286774       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 16:42:19.291252       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 16:42:28.948888       1 controller.go:624] quota admission added evaluator for: endpoints
	I0610 16:42:28.960858       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [1a20ece45402] <==
	* I0610 16:42:28.949717       1 shared_informer.go:318] Caches are synced for PV protection
	I0610 16:42:28.952799       1 shared_informer.go:318] Caches are synced for ephemeral
	I0610 16:42:28.954075       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0610 16:42:28.959193       1 shared_informer.go:318] Caches are synced for cronjob
	I0610 16:42:28.964799       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 16:42:28.970251       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 16:42:28.970345       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 16:42:28.974760       1 shared_informer.go:318] Caches are synced for taint
	I0610 16:42:28.974902       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0610 16:42:28.975136       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-826000"
	I0610 16:42:28.975188       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0610 16:42:28.974928       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0610 16:42:28.975439       1 taint_manager.go:211] "Sending events to api server"
	I0610 16:42:28.975620       1 event.go:307] "Event occurred" object="multinode-826000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-826000 event: Registered Node multinode-826000 in Controller"
	I0610 16:42:28.977173       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0610 16:42:28.978342       1 shared_informer.go:318] Caches are synced for stateful set
	I0610 16:42:29.003625       1 shared_informer.go:318] Caches are synced for HPA
	I0610 16:42:29.036274       1 shared_informer.go:318] Caches are synced for deployment
	I0610 16:42:29.049004       1 shared_informer.go:318] Caches are synced for disruption
	I0610 16:42:29.051106       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 16:42:29.071688       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 16:42:29.088150       1 shared_informer.go:318] Caches are synced for attach detach
	I0610 16:42:29.480175       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 16:42:29.480197       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0610 16:42:29.486250       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [ae72b9818103] <==
	* I0610 16:40:55.682025       1 shared_informer.go:318] Caches are synced for PVC protection
	I0610 16:40:55.682087       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0610 16:40:55.682463       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0610 16:40:55.682528       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0610 16:40:55.682969       1 shared_informer.go:318] Caches are synced for job
	I0610 16:40:55.685211       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0610 16:40:55.687715       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0610 16:40:55.717624       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-zhp88"
	I0610 16:40:55.749857       1 shared_informer.go:318] Caches are synced for attach detach
	I0610 16:40:55.751495       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-r9sjl"
	I0610 16:40:55.772147       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 16:40:55.830662       1 shared_informer.go:318] Caches are synced for taint
	I0610 16:40:55.830803       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0610 16:40:55.830866       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0610 16:40:55.830888       1 taint_manager.go:211] "Sending events to api server"
	I0610 16:40:55.831285       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-826000"
	I0610 16:40:55.831308       1 node_lifecycle_controller.go:1027] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0610 16:40:55.831403       1 event.go:307] "Event occurred" object="multinode-826000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-826000 event: Registered Node multinode-826000 in Controller"
	I0610 16:40:55.842686       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 16:40:55.896813       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0610 16:40:55.933788       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-zhp88"
	I0610 16:40:56.194791       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 16:40:56.231277       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 16:40:56.231338       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0610 16:41:05.833553       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	* 
	* ==> kube-proxy [3246cc4a932c] <==
	* I0610 16:40:57.738817       1 node.go:141] Successfully retrieved node IP: 192.168.64.12
	I0610 16:40:57.738885       1 server_others.go:110] "Detected node IP" address="192.168.64.12"
	I0610 16:40:57.738899       1 server_others.go:551] "Using iptables proxy"
	I0610 16:40:57.763801       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 16:40:57.763883       1 server_others.go:190] "Using iptables Proxier"
	I0610 16:40:57.764218       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 16:40:57.764791       1 server.go:657] "Version info" version="v1.27.2"
	I0610 16:40:57.764844       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:40:57.766097       1 config.go:188] "Starting service config controller"
	I0610 16:40:57.766529       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 16:40:57.767401       1 config.go:97] "Starting endpoint slice config controller"
	I0610 16:40:57.767453       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 16:40:57.766609       1 config.go:315] "Starting node config controller"
	I0610 16:40:57.768401       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 16:40:57.867666       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 16:40:57.867833       1 shared_informer.go:318] Caches are synced for service config
	I0610 16:40:57.868467       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [5cd149e6a33f] <==
	* I0610 16:42:18.493700       1 node.go:141] Successfully retrieved node IP: 192.168.64.12
	I0610 16:42:18.493981       1 server_others.go:110] "Detected node IP" address="192.168.64.12"
	I0610 16:42:18.494277       1 server_others.go:551] "Using iptables proxy"
	I0610 16:42:18.767151       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 16:42:18.767185       1 server_others.go:190] "Using iptables Proxier"
	I0610 16:42:18.767499       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 16:42:18.768341       1 server.go:657] "Version info" version="v1.27.2"
	I0610 16:42:18.768371       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:42:18.770179       1 config.go:188] "Starting service config controller"
	I0610 16:42:18.770461       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 16:42:18.770809       1 config.go:97] "Starting endpoint slice config controller"
	I0610 16:42:18.770836       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 16:42:18.773322       1 config.go:315] "Starting node config controller"
	I0610 16:42:18.773349       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 16:42:18.871310       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 16:42:18.871449       1 shared_informer.go:318] Caches are synced for service config
	I0610 16:42:18.873434       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8ba9a16fd0bb] <==
	* I0610 16:42:14.458749       1 serving.go:348] Generated self-signed cert in-memory
	W0610 16:42:16.436078       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0610 16:42:16.436188       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 16:42:16.436234       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 16:42:16.436250       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 16:42:16.461624       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0610 16:42:16.461709       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:42:16.464450       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0610 16:42:16.465341       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 16:42:16.465615       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 16:42:16.468929       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 16:42:16.565769       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [c0054420e3b8] <==
	* E0610 16:40:39.958938       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 16:40:39.959025       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 16:40:39.959136       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 16:40:39.959229       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 16:40:39.959323       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 16:40:39.959392       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:40:39.959472       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:40:39.959730       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 16:40:39.959782       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 16:40:39.959898       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 16:40:39.959973       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 16:40:39.960084       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 16:40:39.960134       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 16:40:40.792534       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 16:40:40.792622       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 16:40:40.962518       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:40:40.962536       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:40:40.981767       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 16:40:40.981852       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 16:40:41.339329       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 16:41:11.929239       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0610 16:41:11.929285       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0610 16:41:11.929426       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0610 16:41:11.929712       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0610 16:41:11.929767       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 16:41:28 UTC, ends at Sat 2023-06-10 16:42:39 UTC. --
	Jun 10 16:42:17 multinode-826000 kubelet[1266]: I0610 16:42:17.450966    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52c8c8ff-4db3-4df4-9a64-dfa1f0221f20-lib-modules\") pod \"kube-proxy-7dxj9\" (UID: \"52c8c8ff-4db3-4df4-9a64-dfa1f0221f20\") " pod="kube-system/kube-proxy-7dxj9"
	Jun 10 16:42:17 multinode-826000 kubelet[1266]: I0610 16:42:17.451046    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39c3c671-53e3-4745-ad44-d4d88bac2e7b-lib-modules\") pod \"kindnet-9r8df\" (UID: \"39c3c671-53e3-4745-ad44-d4d88bac2e7b\") " pod="kube-system/kindnet-9r8df"
	Jun 10 16:42:17 multinode-826000 kubelet[1266]: I0610 16:42:17.451065    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhj7n\" (UniqueName: \"kubernetes.io/projected/39c3c671-53e3-4745-ad44-d4d88bac2e7b-kube-api-access-hhj7n\") pod \"kindnet-9r8df\" (UID: \"39c3c671-53e3-4745-ad44-d4d88bac2e7b\") " pod="kube-system/kindnet-9r8df"
	Jun 10 16:42:17 multinode-826000 kubelet[1266]: I0610 16:42:17.451081    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/045816f3-b7b8-4909-8dc7-42d6d795adb1-tmp\") pod \"storage-provisioner\" (UID: \"045816f3-b7b8-4909-8dc7-42d6d795adb1\") " pod="kube-system/storage-provisioner"
	Jun 10 16:42:17 multinode-826000 kubelet[1266]: I0610 16:42:17.451095    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume\") pod \"coredns-5d78c9869d-r9sjl\" (UID: \"d3e6fbc7-ad9e-47a1-8592-9a22062f0845\") " pod="kube-system/coredns-5d78c9869d-r9sjl"
	Jun 10 16:42:17 multinode-826000 kubelet[1266]: I0610 16:42:17.451108    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/39c3c671-53e3-4745-ad44-d4d88bac2e7b-cni-cfg\") pod \"kindnet-9r8df\" (UID: \"39c3c671-53e3-4745-ad44-d4d88bac2e7b\") " pod="kube-system/kindnet-9r8df"
	Jun 10 16:42:17 multinode-826000 kubelet[1266]: I0610 16:42:17.451116    1266 reconciler.go:41] "Reconciler: start to sync state"
	Jun 10 16:42:17 multinode-826000 kubelet[1266]: E0610 16:42:17.552346    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 10 16:42:17 multinode-826000 kubelet[1266]: E0610 16:42:17.552408    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume podName:d3e6fbc7-ad9e-47a1-8592-9a22062f0845 nodeName:}" failed. No retries permitted until 2023-06-10 16:42:18.052395876 +0000 UTC m=+6.817174293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume") pod "coredns-5d78c9869d-r9sjl" (UID: "d3e6fbc7-ad9e-47a1-8592-9a22062f0845") : object "kube-system"/"coredns" not registered
	Jun 10 16:42:18 multinode-826000 kubelet[1266]: E0610 16:42:18.057609    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 10 16:42:18 multinode-826000 kubelet[1266]: E0610 16:42:18.057660    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume podName:d3e6fbc7-ad9e-47a1-8592-9a22062f0845 nodeName:}" failed. No retries permitted until 2023-06-10 16:42:19.057649321 +0000 UTC m=+7.822427738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume") pod "coredns-5d78c9869d-r9sjl" (UID: "d3e6fbc7-ad9e-47a1-8592-9a22062f0845") : object "kube-system"/"coredns" not registered
	Jun 10 16:42:18 multinode-826000 kubelet[1266]: E0610 16:42:18.424598    1266 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-r9sjl" podUID=d3e6fbc7-ad9e-47a1-8592-9a22062f0845
	Jun 10 16:42:19 multinode-826000 kubelet[1266]: E0610 16:42:19.064548    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 10 16:42:19 multinode-826000 kubelet[1266]: E0610 16:42:19.064630    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume podName:d3e6fbc7-ad9e-47a1-8592-9a22062f0845 nodeName:}" failed. No retries permitted until 2023-06-10 16:42:21.064619874 +0000 UTC m=+9.829398292 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume") pod "coredns-5d78c9869d-r9sjl" (UID: "d3e6fbc7-ad9e-47a1-8592-9a22062f0845") : object "kube-system"/"coredns" not registered
	Jun 10 16:42:20 multinode-826000 kubelet[1266]: E0610 16:42:20.460133    1266 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-r9sjl" podUID=d3e6fbc7-ad9e-47a1-8592-9a22062f0845
	Jun 10 16:42:20 multinode-826000 kubelet[1266]: I0610 16:42:20.460162    1266 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42123042975e0a2733d510ff5b7dff436088ae55c7330fdf05be6f5d7d18795"
	Jun 10 16:42:21 multinode-826000 kubelet[1266]: E0610 16:42:21.080306    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 10 16:42:21 multinode-826000 kubelet[1266]: E0610 16:42:21.080395    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume podName:d3e6fbc7-ad9e-47a1-8592-9a22062f0845 nodeName:}" failed. No retries permitted until 2023-06-10 16:42:25.080381955 +0000 UTC m=+13.845160380 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume") pod "coredns-5d78c9869d-r9sjl" (UID: "d3e6fbc7-ad9e-47a1-8592-9a22062f0845") : object "kube-system"/"coredns" not registered
	Jun 10 16:42:21 multinode-826000 kubelet[1266]: E0610 16:42:21.503044    1266 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Jun 10 16:42:22 multinode-826000 kubelet[1266]: E0610 16:42:22.424403    1266 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-r9sjl" podUID=d3e6fbc7-ad9e-47a1-8592-9a22062f0845
	Jun 10 16:42:24 multinode-826000 kubelet[1266]: E0610 16:42:24.424780    1266 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-r9sjl" podUID=d3e6fbc7-ad9e-47a1-8592-9a22062f0845
	Jun 10 16:42:25 multinode-826000 kubelet[1266]: E0610 16:42:25.114004    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 10 16:42:25 multinode-826000 kubelet[1266]: E0610 16:42:25.114335    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume podName:d3e6fbc7-ad9e-47a1-8592-9a22062f0845 nodeName:}" failed. No retries permitted until 2023-06-10 16:42:33.114317249 +0000 UTC m=+21.879095685 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume") pod "coredns-5d78c9869d-r9sjl" (UID: "d3e6fbc7-ad9e-47a1-8592-9a22062f0845") : object "kube-system"/"coredns" not registered
	Jun 10 16:42:26 multinode-826000 kubelet[1266]: E0610 16:42:26.424267    1266 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-r9sjl" podUID=d3e6fbc7-ad9e-47a1-8592-9a22062f0845
	Jun 10 16:42:33 multinode-826000 kubelet[1266]: I0610 16:42:33.616263    1266 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c86056c94d3df26c2732ba843da6cb214d22264baf724bc497ce210e23d6ef"
	
	* 
	* ==> storage-provisioner [6785f017705f] <==
	* I0610 16:42:18.729602       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	* 
	* ==> storage-provisioner [e628a3dfc251] <==
	* I0610 16:41:06.540908       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 16:41:06.559295       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 16:41:06.559507       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 16:41:06.571970       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 16:41:06.573276       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-826000_65f7510d-39b4-4e3d-9761-a740afd6d163!
	I0610 16:41:06.584504       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"610b228d-9310-4cdc-8468-8ce5be660bed", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-826000_65f7510d-39b4-4e3d-9761-a740afd6d163 became leader
	I0610 16:41:06.674123       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-826000_65f7510d-39b4-4e3d-9761-a740afd6d163!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-826000 -n multinode-826000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-826000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (80.06s)
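Note on the failure above: the post-mortem kubelet log shows the coredns config-volume mount retrying with backoff (500ms, 1s, 2s, 4s, then 8s) because the kube-system/coredns ConfigMap was not yet registered while the CNI config was still uninitialized after the restart. A minimal sketch for inspecting that state by hand, assuming the multinode-826000 profile from the log is still running (these commands are standard minikube/kubectl usage and are not part of the recorded test run):

	out/minikube-darwin-amd64 -p multinode-826000 kubectl -- -n kube-system get configmap coredns
	out/minikube-darwin-amd64 -p multinode-826000 kubectl -- -n kube-system get pods -o wide
	out/minikube-darwin-amd64 -p multinode-826000 ssh -- ls /etc/cni/net.d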

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (82s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-826000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-826000-m01 --driver=hyperkit 
multinode_test.go:452: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-826000-m01 --driver=hyperkit : (37.032565258s)
multinode_test.go:454: expected start profile command to fail. args "out/minikube-darwin-amd64 start -p multinode-826000-m01 --driver=hyperkit "
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-826000-m02 --driver=hyperkit 
multinode_test.go:460: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-826000-m02 --driver=hyperkit : (36.051968243s)
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-826000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-826000: exit status 80 (287.57971ms)

                                                
                                                
-- stdout --
	* Adding node m02 to cluster multinode-826000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-826000-m02 already exists in multinode-826000-m02 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
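The GUEST_NODE_ADD error above is the collision that multinode_test.go:454 also flags: a standalone profile named multinode-826000-m02 already owns the machine name that "node add" would assign to the next node of multinode-826000, so the add is refused rather than clobbering the existing profile. A minimal sketch of the same collision, using only commands already recorded in this run:

	out/minikube-darwin-amd64 start -p multinode-826000-m02 --driver=hyperkit   # standalone profile takes the -m02 name
	out/minikube-darwin-amd64 node add -p multinode-826000                      # exit status 80: name already in use
	out/minikube-darwin-amd64 delete -p multinode-826000-m02                    # cleanup, as the test does next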
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-826000-m02
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-826000-m02: (5.251086983s)
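After the cleanup delete, a quick way to confirm which profiles remain before reading the post-mortem below (a standard minikube command, not part of the recorded run):

	out/minikube-darwin-amd64 profile list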
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-826000 -n multinode-826000
helpers_test.go:244: <<< TestMultiNode/serial/ValidateNameConflict FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/ValidateNameConflict]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-826000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-826000 logs -n 25: (2.914116829s)
helpers_test.go:252: TestMultiNode/serial/ValidateNameConflict logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:39 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                      |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- exec          | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | -- nslookup kubernetes.io            |                      |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- exec          | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | -- nslookup kubernetes.default       |                      |         |         |                     |                     |
	| kubectl | -p multinode-826000                  | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | -- exec  -- nslookup                 |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                      |         |         |                     |                     |
	| kubectl | -p multinode-826000 -- get pods -o   | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                      |         |         |                     |                     |
	| node    | add -p multinode-826000 -v 3         | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	| node    | multinode-826000 node stop m03       | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	| node    | multinode-826000 node start          | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	|         | m03 --alsologtostderr                |                      |         |         |                     |                     |
	| node    | list -p multinode-826000             | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT |                     |
	| stop    | -p multinode-826000                  | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT | 10 Jun 23 09:40 PDT |
	| start   | -p multinode-826000                  | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT | 10 Jun 23 09:41 PDT |
	|         | --wait=true -v=8                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	| node    | list -p multinode-826000             | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:41 PDT |                     |
	| node    | multinode-826000 node delete         | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:41 PDT |                     |
	|         | m03                                  |                      |         |         |                     |                     |
	| stop    | multinode-826000 stop                | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:41 PDT | 10 Jun 23 09:41 PDT |
	| start   | -p multinode-826000                  | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:41 PDT | 10 Jun 23 09:42 PDT |
	|         | --wait=true -v=8                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| node    | list -p multinode-826000             | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:42 PDT |                     |
	| start   | -p multinode-826000-m01              | multinode-826000-m01 | jenkins | v1.30.1 | 10 Jun 23 09:42 PDT | 10 Jun 23 09:43 PDT |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| start   | -p multinode-826000-m02              | multinode-826000-m02 | jenkins | v1.30.1 | 10 Jun 23 09:43 PDT | 10 Jun 23 09:43 PDT |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| node    | add -p multinode-826000              | multinode-826000     | jenkins | v1.30.1 | 10 Jun 23 09:43 PDT |                     |
	| delete  | -p multinode-826000-m02              | multinode-826000-m02 | jenkins | v1.30.1 | 10 Jun 23 09:43 PDT | 10 Jun 23 09:43 PDT |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:43:17
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.4 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:43:17.194684    3777 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:43:17.194856    3777 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:43:17.194860    3777 out.go:309] Setting ErrFile to fd 2...
	I0610 09:43:17.194863    3777 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:43:17.194967    3777 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
	I0610 09:43:17.196298    3777 out.go:303] Setting JSON to false
	I0610 09:43:17.215597    3777 start.go:127] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2567,"bootTime":1686412830,"procs":394,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0610 09:43:17.215701    3777 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:43:17.236823    3777 out.go:177] * [multinode-826000-m02] minikube v1.30.1 on Darwin 13.4
	I0610 09:43:17.278752    3777 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:43:17.278758    3777 notify.go:220] Checking for updates...
	I0610 09:43:17.320843    3777 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:43:17.341884    3777 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 09:43:17.362691    3777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:43:17.383783    3777 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube
	I0610 09:43:17.404841    3777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:43:17.426397    3777 config.go:182] Loaded profile config "multinode-826000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:43:17.426574    3777 config.go:182] Loaded profile config "multinode-826000-m01": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:43:17.426677    3777 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:43:17.454665    3777 out.go:177] * Using the hyperkit driver based on user configuration
	I0610 09:43:17.496768    3777 start.go:297] selected driver: hyperkit
	I0610 09:43:17.496789    3777 start.go:875] validating driver "hyperkit" against <nil>
	I0610 09:43:17.496838    3777 start.go:886] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:43:17.496955    3777 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:43:17.497126    3777 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/16578-1235/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 09:43:17.505469    3777 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.30.1
	I0610 09:43:17.509034    3777 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:43:17.509050    3777 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 09:43:17.509149    3777 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 09:43:17.511525    3777 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0610 09:43:17.511665    3777 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 09:43:17.511685    3777 cni.go:84] Creating CNI manager for ""
	I0610 09:43:17.511696    3777 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:43:17.511701    3777 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 09:43:17.511708    3777 start_flags.go:319] config:
	{Name:multinode-826000-m02 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-826000-m02 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:43:17.512417    3777 iso.go:125] acquiring lock: {Name:mkc028968ad126cece35ec994c5f11699b30bc34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:43:17.533559    3777 out.go:177] * Starting control plane node multinode-826000-m02 in cluster multinode-826000-m02
	I0610 09:43:17.575771    3777 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:43:17.575815    3777 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0610 09:43:17.575831    3777 cache.go:57] Caching tarball of preloaded images
	I0610 09:43:17.575925    3777 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 09:43:17.575934    3777 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 09:43:17.576015    3777 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/config.json ...
	I0610 09:43:17.576038    3777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/config.json: {Name:mk59e1344eecc4c2f653df58e47cc12cd68fdac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:43:17.576277    3777 cache.go:195] Successfully downloaded all kic artifacts
	I0610 09:43:17.576304    3777 start.go:364] acquiring machines lock for multinode-826000-m02: {Name:mk73e5861e2a32aaad6eda5ce405a92c74d96949 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 09:43:17.576343    3777 start.go:368] acquired machines lock for "multinode-826000-m02" in 33.645µs
	I0610 09:43:17.576361    3777 start.go:93] Provisioning new machine with config: &{Name:multinode-826000-m02 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-826000-m02 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:43:17.576397    3777 start.go:125] createHost starting for "" (driver="hyperkit")
	I0610 09:43:17.634956    3777 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0610 09:43:17.635279    3777 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:43:17.635308    3777 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:43:17.642858    3777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51325
	I0610 09:43:17.643233    3777 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:43:17.643667    3777 main.go:141] libmachine: Using API Version  1
	I0610 09:43:17.643691    3777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:43:17.643962    3777 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:43:17.644073    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetMachineName
	I0610 09:43:17.644162    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .DriverName
	I0610 09:43:17.644257    3777 start.go:159] libmachine.API.Create for "multinode-826000-m02" (driver="hyperkit")
	I0610 09:43:17.644287    3777 client.go:168] LocalClient.Create starting
	I0610 09:43:17.644326    3777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem
	I0610 09:43:17.644363    3777 main.go:141] libmachine: Decoding PEM data...
	I0610 09:43:17.644377    3777 main.go:141] libmachine: Parsing certificate...
	I0610 09:43:17.644445    3777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem
	I0610 09:43:17.644467    3777 main.go:141] libmachine: Decoding PEM data...
	I0610 09:43:17.644476    3777 main.go:141] libmachine: Parsing certificate...
	I0610 09:43:17.644487    3777 main.go:141] libmachine: Running pre-create checks...
	I0610 09:43:17.644492    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .PreCreateCheck
	I0610 09:43:17.644562    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:43:17.644707    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetConfigRaw
	I0610 09:43:17.645105    3777 main.go:141] libmachine: Creating machine...
	I0610 09:43:17.645110    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .Create
	I0610 09:43:17.645187    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:43:17.645308    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | I0610 09:43:17.645183    3785 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/16578-1235/.minikube
	I0610 09:43:17.645357    3777 main.go:141] libmachine: (multinode-826000-m02) Downloading /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1235/.minikube/cache/iso/amd64/minikube-v1.30.1-1686096373-16019-amd64.iso...
	I0610 09:43:17.809626    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | I0610 09:43:17.809558    3785 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/id_rsa...
	I0610 09:43:17.951448    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | I0610 09:43:17.951363    3785 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/multinode-826000-m02.rawdisk...
	I0610 09:43:17.951458    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Writing magic tar header
	I0610 09:43:17.951466    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Writing SSH key tar header
	I0610 09:43:17.952177    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | I0610 09:43:17.952075    3785 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02 ...
	I0610 09:43:18.297931    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:43:18.297947    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/hyperkit.pid
	I0610 09:43:18.297988    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Using UUID e6b3e490-07ad-11ee-8000-f01898ef957c
	I0610 09:43:18.329346    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Generated MAC 42:dd:e0:74:b0:ca
	I0610 09:43:18.329358    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-826000-m02
	I0610 09:43:18.329392    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e6b3e490-07ad-11ee-8000-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000e0690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 09:43:18.329421    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e6b3e490-07ad-11ee-8000-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000e0690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 09:43:18.329512    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e6b3e490-07ad-11ee-8000-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/multinode-826000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/tty,log=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/bzimage,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-826000-m02"}
	I0610 09:43:18.329555    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e6b3e490-07ad-11ee-8000-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/multinode-826000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/tty,log=/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/bzimage,/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-826000-m02"
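
The DEBUG lines above spell out the full hyperkit invocation: -F names the pid file, -c/-m size the VM (2 vCPUs, 6000M), -U pins the VM UUID (from which the NIC's MAC is derived, per the "Using UUID"/"Generated MAC" pair earlier), the -s flags populate PCI slots (hostbridge, lpc, virtio-net, the raw disk, the boot ISO, virtio-rnd), -l attaches com1 to an autopty with a console-ring log, and -f kexec boots the unpacked bzimage/initrd directly with the kernel command line. A minimal Go sketch of launching that command with os/exec; the flags and paths are copied from the log, and the kernel command line is truncated here for brevity:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        statedir := "/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02"
        cmd := exec.Command("/usr/local/bin/hyperkit",
            "-A", "-u",
            "-F", statedir+"/hyperkit.pid", // pid file the driver later reads back
            "-c", "2", // vCPUs
            "-m", "6000M", // memory
            "-s", "0:0,hostbridge", "-s", "31,lpc", // basic chipset
            "-s", "1:0,virtio-net", // NIC; the MAC is derived from the -U UUID
            "-U", "e6b3e490-07ad-11ee-8000-f01898ef957c",
            "-s", "2:0,virtio-blk,"+statedir+"/multinode-826000-m02.rawdisk",
            "-s", "3,ahci-cd,"+statedir+"/boot2docker.iso",
            "-s", "4,virtio-rnd",
            // kernel command line truncated relative to the log
            "-f", "kexec,"+statedir+"/bzimage,"+statedir+"/initrd,loglevel=3 console=ttyS0",
        )
        // Start, not Run: the VM process stays up and is monitored via its pid file.
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        log.Printf("hyperkit pid: %d", cmd.Process.Pid)
    }

Starting rather than waiting is the important choice: the driver returns immediately and polls the VM's state out of band, which is exactly what the "Attempt N" lines that follow show.
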
	I0610 09:43:18.329566    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 09:43:18.332029    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 DEBUG: hyperkit: Pid is 3786
	I0610 09:43:18.332414    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Attempt 0
	I0610 09:43:18.332423    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:43:18.332475    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | hyperkit pid from json: 3786
	I0610 09:43:18.333328    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Searching for 42:dd:e0:74:b0:ca in /var/db/dhcpd_leases ...
	I0610 09:43:18.333356    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0610 09:43:18.333379    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:72:87:2c:a3:d2:c4 ID:1,72:87:2c:a3:d2:c4 Lease:0x6485f989}
	I0610 09:43:18.333395    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:fa:20:3f:84:ae:92 ID:1,fa:20:3f:84:ae:92 Lease:0x6485f938}
	I0610 09:43:18.333403    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:32:30:6d:e9:c8:b4 ID:1,32:30:6d:e9:c8:b4 Lease:0x6484a701}
	I0610 09:43:18.333408    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:a6:94:da:ab:ab:e2 ID:1,a6:94:da:ab:ab:e2 Lease:0x6484a6eb}
	I0610 09:43:18.333414    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:3a:96:c4:94:8e:b0 ID:1,3a:96:c4:94:8e:b0 Lease:0x6485f81d}
	I0610 09:43:18.333420    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:e6:27:b7:b3:13:83 ID:1,e6:27:b7:b3:13:83 Lease:0x6485f7f9}
	I0610 09:43:18.333425    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ea:f7:ed:fb:5e:ee ID:1,ea:f7:ed:fb:5e:ee Lease:0x6485f7ba}
	I0610 09:43:18.333431    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:c2:ab:cc:f4:2:8a ID:1,c2:ab:cc:f4:2:8a Lease:0x6485f73e}
	I0610 09:43:18.333446    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:7e:c9:b9:4e:e6:61 ID:1,7e:c9:b9:4e:e6:61 Lease:0x6485f6f5}
	I0610 09:43:18.333452    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:2a:80:59:1b:ab:5a ID:1,2a:80:59:1b:ab:5a Lease:0x6485f613}
	I0610 09:43:18.333458    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:4:36:62:66:5d ID:1,ca:4:36:62:66:5d Lease:0x6485f5e7}
	I0610 09:43:18.333463    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ca:e3:b4:f8:a0:57 ID:1,ca:e3:b4:f8:a0:57 Lease:0x6485f4b1}
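
Each numbered "Attempt" is a rescan of /var/db/dhcpd_leases, the lease database kept by macOS's bootpd, looking for the freshly generated MAC 42:dd:e0:74:b0:ca; a match can only appear once the guest has booted far enough to request a DHCP lease. A sketch of that lookup in Go, assuming a brace-delimited key=value on-disk layout consistent with the parsed entries logged above (the ip_address/hw_address field names are an assumption about the file format):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findLeaseIP scans the bootpd lease file for a hardware address and returns
    // the associated IP. Assumed entry layout:
    //   {
    //           name=minikube
    //           ip_address=192.168.64.14
    //           hw_address=1,42:dd:e0:74:b0:ca
    //           lease=0x6485f9af
    //   }
    func findLeaseIP(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip, hw string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "{": // start of a new entry: reset accumulated fields
                ip, hw = "", ""
            case line == "}": // end of entry: check for a match
                if hw == mac && ip != "" {
                    return ip, nil
                }
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // hw_address is "<type>,<mac>"; keep only the MAC part.
                if _, m, ok := strings.Cut(strings.TrimPrefix(line, "hw_address="), ","); ok {
                    hw = m
                }
            }
        }
        if err := sc.Err(); err != nil {
            return "", err
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        fmt.Println(findLeaseIP("/var/db/dhcpd_leases", "42:dd:e0:74:b0:ca"))
    }
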
	I0610 09:43:18.338610    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 09:43:18.347739    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 09:43:18.348510    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 09:43:18.348531    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 09:43:18.348542    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 09:43:18.348559    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 09:43:18.914279    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 09:43:18.914290    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 09:43:19.019249    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 09:43:19.019264    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 09:43:19.019274    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 09:43:19.019283    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 09:43:19.020131    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 09:43:19.020139    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 09:43:20.333722    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Attempt 1
	I0610 09:43:20.333734    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:43:20.333829    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | hyperkit pid from json: 3786
	I0610 09:43:20.334615    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Searching for 42:dd:e0:74:b0:ca in /var/db/dhcpd_leases ...
	I0610 09:43:20.334657    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0610 09:43:20.334673    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:72:87:2c:a3:d2:c4 ID:1,72:87:2c:a3:d2:c4 Lease:0x6485f989}
	I0610 09:43:20.334686    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:fa:20:3f:84:ae:92 ID:1,fa:20:3f:84:ae:92 Lease:0x6485f938}
	I0610 09:43:20.334694    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:32:30:6d:e9:c8:b4 ID:1,32:30:6d:e9:c8:b4 Lease:0x6484a701}
	I0610 09:43:20.334702    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:a6:94:da:ab:ab:e2 ID:1,a6:94:da:ab:ab:e2 Lease:0x6484a6eb}
	I0610 09:43:20.334707    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:3a:96:c4:94:8e:b0 ID:1,3a:96:c4:94:8e:b0 Lease:0x6485f81d}
	I0610 09:43:20.334713    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:e6:27:b7:b3:13:83 ID:1,e6:27:b7:b3:13:83 Lease:0x6485f7f9}
	I0610 09:43:20.334720    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ea:f7:ed:fb:5e:ee ID:1,ea:f7:ed:fb:5e:ee Lease:0x6485f7ba}
	I0610 09:43:20.334726    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:c2:ab:cc:f4:2:8a ID:1,c2:ab:cc:f4:2:8a Lease:0x6485f73e}
	I0610 09:43:20.334731    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:7e:c9:b9:4e:e6:61 ID:1,7e:c9:b9:4e:e6:61 Lease:0x6485f6f5}
	I0610 09:43:20.334746    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:2a:80:59:1b:ab:5a ID:1,2a:80:59:1b:ab:5a Lease:0x6485f613}
	I0610 09:43:20.334756    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:4:36:62:66:5d ID:1,ca:4:36:62:66:5d Lease:0x6485f5e7}
	I0610 09:43:20.334765    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ca:e3:b4:f8:a0:57 ID:1,ca:e3:b4:f8:a0:57 Lease:0x6485f4b1}
	I0610 09:43:22.334762    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Attempt 2
	I0610 09:43:22.334774    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:43:22.334869    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | hyperkit pid from json: 3786
	I0610 09:43:22.335611    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Searching for 42:dd:e0:74:b0:ca in /var/db/dhcpd_leases ...
	I0610 09:43:22.335660    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0610 09:43:22.335667    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:72:87:2c:a3:d2:c4 ID:1,72:87:2c:a3:d2:c4 Lease:0x6485f989}
	I0610 09:43:22.335683    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:fa:20:3f:84:ae:92 ID:1,fa:20:3f:84:ae:92 Lease:0x6485f938}
	I0610 09:43:22.335693    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:32:30:6d:e9:c8:b4 ID:1,32:30:6d:e9:c8:b4 Lease:0x6484a701}
	I0610 09:43:22.335701    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:a6:94:da:ab:ab:e2 ID:1,a6:94:da:ab:ab:e2 Lease:0x6484a6eb}
	I0610 09:43:22.335706    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:3a:96:c4:94:8e:b0 ID:1,3a:96:c4:94:8e:b0 Lease:0x6485f81d}
	I0610 09:43:22.335713    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:e6:27:b7:b3:13:83 ID:1,e6:27:b7:b3:13:83 Lease:0x6485f7f9}
	I0610 09:43:22.335718    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ea:f7:ed:fb:5e:ee ID:1,ea:f7:ed:fb:5e:ee Lease:0x6485f7ba}
	I0610 09:43:22.335738    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:c2:ab:cc:f4:2:8a ID:1,c2:ab:cc:f4:2:8a Lease:0x6485f73e}
	I0610 09:43:22.335747    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:7e:c9:b9:4e:e6:61 ID:1,7e:c9:b9:4e:e6:61 Lease:0x6485f6f5}
	I0610 09:43:22.335754    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:2a:80:59:1b:ab:5a ID:1,2a:80:59:1b:ab:5a Lease:0x6485f613}
	I0610 09:43:22.335759    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:4:36:62:66:5d ID:1,ca:4:36:62:66:5d Lease:0x6485f5e7}
	I0610 09:43:22.335771    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ca:e3:b4:f8:a0:57 ID:1,ca:e3:b4:f8:a0:57 Lease:0x6485f4b1}
	I0610 09:43:23.608726    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 09:43:23.608787    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 09:43:23.608802    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | 2023/06/10 09:43:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 09:43:24.336362    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Attempt 3
	I0610 09:43:24.336372    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:43:24.336444    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | hyperkit pid from json: 3786
	I0610 09:43:24.337193    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Searching for 42:dd:e0:74:b0:ca in /var/db/dhcpd_leases ...
	I0610 09:43:24.337247    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0610 09:43:24.337254    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:72:87:2c:a3:d2:c4 ID:1,72:87:2c:a3:d2:c4 Lease:0x6485f989}
	I0610 09:43:24.337262    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:fa:20:3f:84:ae:92 ID:1,fa:20:3f:84:ae:92 Lease:0x6485f938}
	I0610 09:43:24.337267    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:32:30:6d:e9:c8:b4 ID:1,32:30:6d:e9:c8:b4 Lease:0x6484a701}
	I0610 09:43:24.337284    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:a6:94:da:ab:ab:e2 ID:1,a6:94:da:ab:ab:e2 Lease:0x6484a6eb}
	I0610 09:43:24.337290    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:3a:96:c4:94:8e:b0 ID:1,3a:96:c4:94:8e:b0 Lease:0x6485f81d}
	I0610 09:43:24.337295    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:e6:27:b7:b3:13:83 ID:1,e6:27:b7:b3:13:83 Lease:0x6485f7f9}
	I0610 09:43:24.337303    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ea:f7:ed:fb:5e:ee ID:1,ea:f7:ed:fb:5e:ee Lease:0x6485f7ba}
	I0610 09:43:24.337310    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:c2:ab:cc:f4:2:8a ID:1,c2:ab:cc:f4:2:8a Lease:0x6485f73e}
	I0610 09:43:24.337315    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:7e:c9:b9:4e:e6:61 ID:1,7e:c9:b9:4e:e6:61 Lease:0x6485f6f5}
	I0610 09:43:24.337326    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:2a:80:59:1b:ab:5a ID:1,2a:80:59:1b:ab:5a Lease:0x6485f613}
	I0610 09:43:24.337333    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:4:36:62:66:5d ID:1,ca:4:36:62:66:5d Lease:0x6485f5e7}
	I0610 09:43:24.337342    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ca:e3:b4:f8:a0:57 ID:1,ca:e3:b4:f8:a0:57 Lease:0x6485f4b1}
	I0610 09:43:26.337579    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Attempt 4
	I0610 09:43:26.337601    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:43:26.337661    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | hyperkit pid from json: 3786
	I0610 09:43:26.338407    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Searching for 42:dd:e0:74:b0:ca in /var/db/dhcpd_leases ...
	I0610 09:43:26.338463    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0610 09:43:26.338473    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:72:87:2c:a3:d2:c4 ID:1,72:87:2c:a3:d2:c4 Lease:0x6485f989}
	I0610 09:43:26.338482    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:fa:20:3f:84:ae:92 ID:1,fa:20:3f:84:ae:92 Lease:0x6485f938}
	I0610 09:43:26.338502    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:32:30:6d:e9:c8:b4 ID:1,32:30:6d:e9:c8:b4 Lease:0x6484a701}
	I0610 09:43:26.338510    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:a6:94:da:ab:ab:e2 ID:1,a6:94:da:ab:ab:e2 Lease:0x6484a6eb}
	I0610 09:43:26.338516    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:3a:96:c4:94:8e:b0 ID:1,3a:96:c4:94:8e:b0 Lease:0x6485f81d}
	I0610 09:43:26.338524    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:e6:27:b7:b3:13:83 ID:1,e6:27:b7:b3:13:83 Lease:0x6485f7f9}
	I0610 09:43:26.338530    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ea:f7:ed:fb:5e:ee ID:1,ea:f7:ed:fb:5e:ee Lease:0x6485f7ba}
	I0610 09:43:26.338537    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:c2:ab:cc:f4:2:8a ID:1,c2:ab:cc:f4:2:8a Lease:0x6485f73e}
	I0610 09:43:26.338543    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:7e:c9:b9:4e:e6:61 ID:1,7e:c9:b9:4e:e6:61 Lease:0x6485f6f5}
	I0610 09:43:26.338548    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:2a:80:59:1b:ab:5a ID:1,2a:80:59:1b:ab:5a Lease:0x6485f613}
	I0610 09:43:26.338553    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:ca:4:36:62:66:5d ID:1,ca:4:36:62:66:5d Lease:0x6485f5e7}
	I0610 09:43:26.338563    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ca:e3:b4:f8:a0:57 ID:1,ca:e3:b4:f8:a0:57 Lease:0x6485f4b1}
	I0610 09:43:28.339329    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Attempt 5
	I0610 09:43:28.339340    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:43:28.339447    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | hyperkit pid from json: 3786
	I0610 09:43:28.340225    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Searching for 42:dd:e0:74:b0:ca in /var/db/dhcpd_leases ...
	I0610 09:43:28.340287    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0610 09:43:28.340296    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:42:dd:e0:74:b0:ca ID:1,42:dd:e0:74:b0:ca Lease:0x6485f9af}
	I0610 09:43:28.340302    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Found match: 42:dd:e0:74:b0:ca
	I0610 09:43:28.340306    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | IP: 192.168.64.14
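
The attempts are spaced roughly two seconds apart (compare the timestamps) and stop as soon as the 192.168.64.14 lease shows up. The polling wrapper is the obvious loop; reusing findLeaseIP from the sketch above, with an interval and timeout that are illustrative rather than the driver's actual values:

    // waitForIP polls the lease table until the MAC appears or the timeout expires.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if ip, err := findLeaseIP("/var/db/dhcpd_leases", mac); err == nil {
                return ip, nil
            }
            time.Sleep(2 * time.Second) // matches the ~2s spacing of the attempts above
        }
        return "", fmt.Errorf("timed out waiting for a DHCP lease for %s", mac)
    }
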
	I0610 09:43:28.340360    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetConfigRaw
	I0610 09:43:28.340955    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .DriverName
	I0610 09:43:28.341047    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .DriverName
	I0610 09:43:28.341134    3777 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 09:43:28.341144    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetState
	I0610 09:43:28.341221    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:43:28.341278    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | hyperkit pid from json: 3786
	I0610 09:43:28.342072    3777 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 09:43:28.342081    3777 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 09:43:28.342085    3777 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 09:43:28.342088    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:28.342172    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:28.342255    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:28.342340    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:28.342438    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:28.342572    3777 main.go:141] libmachine: Using SSH client type: native
	I0610 09:43:28.342940    3777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.14 22 <nil> <nil>}
	I0610 09:43:28.342944    3777 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 09:43:28.409583    3777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
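
`exit 0` is the cheapest possible SSH liveness probe: the first run that returns a nil error proves sshd is reachable and the injected key is accepted. A self-contained version using golang.org/x/crypto/ssh; the docker username and the key path come from the sshutil line later in this log, and InsecureIgnoreHostKey is tolerable here only because the VM was just created from a known image:

    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh VM, no known_hosts yet
            Timeout:         5 * time.Second,
        }
        client, err := ssh.Dial("tcp", "192.168.64.14:22", cfg)
        if err != nil {
            log.Fatal(err) // callers typically retry this until it succeeds
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        if err := sess.Run("exit 0"); err != nil { // the same no-op probe as in the log
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }
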
	I0610 09:43:28.409591    3777 main.go:141] libmachine: Detecting the provisioner...
	I0610 09:43:28.409596    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:28.409751    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:28.409852    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:28.409947    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:28.410021    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:28.410175    3777 main.go:141] libmachine: Using SSH client type: native
	I0610 09:43:28.410487    3777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.14 22 <nil> <nil>}
	I0610 09:43:28.410492    3777 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 09:43:28.477102    3777 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge0c6143-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0610 09:43:28.477152    3777 main.go:141] libmachine: found compatible host: buildroot
	I0610 09:43:28.477156    3777 main.go:141] libmachine: Provisioning with buildroot...
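
Provisioner detection is nothing more than `cat /etc/os-release`: the ID field (buildroot here) selects which provisioner implementation to use. A small standard-library parser for that key=value format:

    // parseOSRelease maps the key=value lines of /etc/os-release, stripping the
    // optional quotes (e.g. PRETTY_NAME="Buildroot 2021.02.12").
    func parseOSRelease(s string) map[string]string {
        out := map[string]string{}
        for _, line := range strings.Split(s, "\n") {
            if k, v, ok := strings.Cut(strings.TrimSpace(line), "="); ok {
                out[k] = strings.Trim(v, `"`)
            }
        }
        return out
    }

    // parseOSRelease(output)["ID"] == "buildroot" picks the buildroot provisioner.
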
	I0610 09:43:28.477162    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetMachineName
	I0610 09:43:28.477302    3777 buildroot.go:166] provisioning hostname "multinode-826000-m02"
	I0610 09:43:28.477313    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetMachineName
	I0610 09:43:28.477432    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:28.477529    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:28.477625    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:28.477725    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:28.477830    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:28.477990    3777 main.go:141] libmachine: Using SSH client type: native
	I0610 09:43:28.478301    3777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.14 22 <nil> <nil>}
	I0610 09:43:28.478307    3777 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-826000-m02 && echo "multinode-826000-m02" | sudo tee /etc/hostname
	I0610 09:43:28.555218    3777 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-826000-m02
	
	I0610 09:43:28.555234    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:28.555367    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:28.555445    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:28.555530    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:28.555616    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:28.555750    3777 main.go:141] libmachine: Using SSH client type: native
	I0610 09:43:28.556066    3777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.14 22 <nil> <nil>}
	I0610 09:43:28.556081    3777 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-826000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-826000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-826000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 09:43:28.628022    3777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 09:43:28.628040    3777 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16578-1235/.minikube CaCertPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16578-1235/.minikube}
	I0610 09:43:28.628055    3777 buildroot.go:174] setting up certificates
	I0610 09:43:28.628063    3777 provision.go:83] configureAuth start
	I0610 09:43:28.628072    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetMachineName
	I0610 09:43:28.628206    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetIP
	I0610 09:43:28.628292    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:28.628365    3777 provision.go:138] copyHostCerts
	I0610 09:43:28.628444    3777 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem, removing ...
	I0610 09:43:28.628450    3777 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem
	I0610 09:43:28.628577    3777 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16578-1235/.minikube/cert.pem (1123 bytes)
	I0610 09:43:28.628801    3777 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem, removing ...
	I0610 09:43:28.628804    3777 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem
	I0610 09:43:28.628872    3777 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16578-1235/.minikube/key.pem (1679 bytes)
	I0610 09:43:28.629031    3777 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem, removing ...
	I0610 09:43:28.629034    3777 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem
	I0610 09:43:28.629093    3777 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.pem (1078 bytes)
	I0610 09:43:28.629246    3777 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca-key.pem org=jenkins.multinode-826000-m02 san=[192.168.64.14 192.168.64.14 localhost 127.0.0.1 minikube multinode-826000-m02]
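
The SAN list in the line above (both hostnames, the VM IP, and loopback) is what lets a single server.pem satisfy TLS verification however the Docker endpoint is addressed. A self-contained crypto/x509 sketch of issuing such a CA-signed server certificate; the CA here is a throwaway generated inline, whereas the log shows the real flow reusing ca.pem/ca-key.pem from the certs directory, and the 26280h lifetime mirrors CertExpiration from the machine config:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA for the example; errors elided in the CA setup for brevity.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-826000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SANs from the log: reachable by IP, loopback, or either name.
            DNSNames:    []string{"localhost", "minikube", "multinode-826000-m02"},
            IPAddresses: []net.IP{net.ParseIP("192.168.64.14"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
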
	I0610 09:43:28.769223    3777 provision.go:172] copyRemoteCerts
	I0610 09:43:28.769284    3777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 09:43:28.769301    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:28.769482    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:28.769620    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:28.769727    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:28.769818    3777 sshutil.go:53] new ssh client: &{IP:192.168.64.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/id_rsa Username:docker}
	I0610 09:43:28.809774    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 09:43:28.825844    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0610 09:43:28.841905    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 09:43:28.858338    3777 provision.go:86] duration metric: configureAuth took 230.259942ms
	I0610 09:43:28.858347    3777 buildroot.go:189] setting minikube options for container-runtime
	I0610 09:43:28.858491    3777 config.go:182] Loaded profile config "multinode-826000-m02": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:43:28.858503    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .DriverName
	I0610 09:43:28.858629    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:28.858730    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:28.858812    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:28.858912    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:28.859003    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:28.859124    3777 main.go:141] libmachine: Using SSH client type: native
	I0610 09:43:28.859426    3777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.14 22 <nil> <nil>}
	I0610 09:43:28.859431    3777 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 09:43:28.927892    3777 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 09:43:28.927900    3777 buildroot.go:70] root file system type: tmpfs
	I0610 09:43:28.927987    3777 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 09:43:28.927999    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:28.928150    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:28.928257    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:28.928371    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:28.928490    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:28.928635    3777 main.go:141] libmachine: Using SSH client type: native
	I0610 09:43:28.928944    3777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.14 22 <nil> <nil>}
	I0610 09:43:28.928988    3777 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 09:43:29.005429    3777 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 09:43:29.005454    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:29.005588    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:29.005680    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:29.005757    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:29.005851    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:29.006004    3777 main.go:141] libmachine: Using SSH client type: native
	I0610 09:43:29.006316    3777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.14 22 <nil> <nil>}
	I0610 09:43:29.006325    3777 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 09:43:29.561266    3777 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 09:43:29.561274    3777 main.go:141] libmachine: Checking connection to Docker...
	I0610 09:43:29.561280    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetURL
	I0610 09:43:29.561412    3777 main.go:141] libmachine: Docker is up and running!
	I0610 09:43:29.561417    3777 main.go:141] libmachine: Reticulating splines...
	I0610 09:43:29.561420    3777 client.go:171] LocalClient.Create took 11.917172282s
	I0610 09:43:29.561432    3777 start.go:167] duration metric: libmachine.API.Create for "multinode-826000-m02" took 11.917217211s
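
Worth noting in the provisioning step just above: the `sudo diff -u ... || { mv ...; systemctl daemon-reload && enable && restart; }` one-liner is an idempotence guard, so the rendered unit is only swapped in, and Docker only restarted, when the file actually differs (on this first boot diff fails because no unit exists yet, hence the "Created symlink" output). The same guard expressed in Go, as a hypothetical helper (bytes, fmt, os, and os/exec from the standard library):

    // installIfChanged writes newUnit to path and restarts the service only when
    // the content differs from what is already installed.
    func installIfChanged(path string, newUnit []byte) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newUnit) {
            return nil // nothing to do; avoid a needless docker restart
        }
        if err := os.WriteFile(path, newUnit, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }
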
	I0610 09:43:29.561439    3777 start.go:300] post-start starting for "multinode-826000-m02" (driver="hyperkit")
	I0610 09:43:29.561444    3777 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 09:43:29.561455    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .DriverName
	I0610 09:43:29.561599    3777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 09:43:29.561610    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:29.561704    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:29.561782    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:29.561851    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:29.561921    3777 sshutil.go:53] new ssh client: &{IP:192.168.64.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/id_rsa Username:docker}
	I0610 09:43:29.603847    3777 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 09:43:29.606658    3777 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 09:43:29.606669    3777 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1235/.minikube/addons for local assets ...
	I0610 09:43:29.606762    3777 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1235/.minikube/files for local assets ...
	I0610 09:43:29.606905    3777 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem -> 16822.pem in /etc/ssl/certs
	I0610 09:43:29.607059    3777 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 09:43:29.618704    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem --> /etc/ssl/certs/16822.pem (1708 bytes)
	I0610 09:43:29.640185    3777 start.go:303] post-start completed in 78.738694ms
	I0610 09:43:29.640216    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetConfigRaw
	I0610 09:43:29.640768    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetIP
	I0610 09:43:29.640911    3777 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/config.json ...
	I0610 09:43:29.641200    3777 start.go:128] duration metric: createHost completed in 12.064840692s
	I0610 09:43:29.641214    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:29.641301    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:29.641391    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:29.641469    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:29.641541    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:29.641649    3777 main.go:141] libmachine: Using SSH client type: native
	I0610 09:43:29.641942    3777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil>  [] 0s} 192.168.64.14 22 <nil> <nil>}
	I0610 09:43:29.641953    3777 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 09:43:29.709972    3777 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686415408.771590424
	
	I0610 09:43:29.709981    3777 fix.go:207] guest clock: 1686415408.771590424
	I0610 09:43:29.709988    3777 fix.go:220] Guest: 2023-06-10 09:43:28.771590424 -0700 PDT Remote: 2023-06-10 09:43:29.641206 -0700 PDT m=+12.477864509 (delta=-869.615576ms)
	I0610 09:43:29.710007    3777 fix.go:191] guest clock delta is within tolerance: -869.615576ms
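
The `date +%!s(MISSING).%!N(MISSING)` above, like the other %!…(MISSING) strings in this log, is a logging artifact rather than the command that ran: Go's fmt prints %!s(MISSING) when a format verb has no matching argument, and the shape of the output (1686415408.771590424) shows the probe was `date +%s.%N`, i.e. the guest's epoch seconds and nanoseconds. The ~-0.87s delta against the host clock is then compared to a tolerance before any clock fix would be attempted. A standard-library-only sketch of that parse-and-compare:

    // guestClockDelta parses `date +%s.%N` output and returns guest minus host.
    func guestClockDelta(out string, host time.Time) (time.Duration, error) {
        secs, nanos, ok := strings.Cut(strings.TrimSpace(out), ".")
        if !ok {
            return 0, fmt.Errorf("unexpected date output %q", out)
        }
        s, err := strconv.ParseInt(secs, 10, 64)
        if err != nil {
            return 0, err
        }
        n, err := strconv.ParseInt(nanos, 10, 64)
        if err != nil {
            return 0, err
        }
        return time.Unix(s, n).Sub(host), nil
    }

    // Usage sketch: a delta inside the tolerance means no resync is needed.
    //   delta, _ := guestClockDelta("1686415408.771590424", time.Now())
    //   if delta < 0 { delta = -delta }
    //   withinTolerance := delta < time.Second
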
	I0610 09:43:29.710010    3777 start.go:83] releasing machines lock for "multinode-826000-m02", held for 12.133705688s
	I0610 09:43:29.710028    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .DriverName
	I0610 09:43:29.710158    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetIP
	I0610 09:43:29.710258    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .DriverName
	I0610 09:43:29.710558    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .DriverName
	I0610 09:43:29.710668    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .DriverName
	I0610 09:43:29.710734    3777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 09:43:29.710763    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:29.710808    3777 ssh_runner.go:195] Run: cat /version.json
	I0610 09:43:29.710818    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:29.710843    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:29.710918    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:29.710932    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:29.711012    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:29.711026    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:29.711116    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:29.711131    3777 sshutil.go:53] new ssh client: &{IP:192.168.64.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/id_rsa Username:docker}
	I0610 09:43:29.711184    3777 sshutil.go:53] new ssh client: &{IP:192.168.64.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/id_rsa Username:docker}
	I0610 09:43:29.749443    3777 ssh_runner.go:195] Run: systemctl --version
	I0610 09:43:29.795977    3777 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 09:43:29.800030    3777 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 09:43:29.800085    3777 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 09:43:29.809648    3777 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 09:43:29.809664    3777 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:43:29.809761    3777 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:43:29.822196    3777 docker.go:633] Got preloaded images: 
	I0610 09:43:29.822202    3777 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0610 09:43:29.822253    3777 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:43:29.829128    3777 ssh_runner.go:195] Run: which lz4
	I0610 09:43:29.831706    3777 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 09:43:29.834313    3777 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 09:43:29.834334    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (412256110 bytes)
	I0610 09:43:31.020246    3777 docker.go:597] Took 1.188617 seconds to copy over tarball
	I0610 09:43:31.020332    3777 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 09:43:34.893913    3777 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.873580871s)
	I0610 09:43:34.893925    3777 ssh_runner.go:146] rm: /preloaded.tar.lz4
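
The sequence above is minikube's preload fast path: a stat probe decides whether the cached image tarball must be copied to the guest, after which it is unpacked into /var with lz4 and deleted. A self-contained sketch of that decision, running the same commands locally instead of over SSH (the run helper is a stand-in for ssh_runner, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
)

// run stands in for minikube's ssh_runner; here commands execute locally
// so the sketch stays self-contained.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	const tarball = "/preloaded.tar.lz4"
	// The stat probe mirrors the existence check in the log; a non-zero
	// exit means the tarball must be copied over first.
	if err := run("stat", "-c", "%s %y", tarball); err != nil {
		fmt.Println("preload missing; after scp'ing it over, unpack and clean up:")
		_ = run("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
		_ = run("rm", "-f", tarball)
		return
	}
	fmt.Println("preload already present")
}
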
	I0610 09:43:34.922029    3777 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:43:34.928747    3777 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0610 09:43:34.940494    3777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:43:35.033916    3777 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:43:36.470695    3777 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.436770596s)
	I0610 09:43:36.470723    3777 start.go:481] detecting cgroup driver to use...
	I0610 09:43:36.470827    3777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:43:36.483715    3777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 09:43:36.490164    3777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 09:43:36.496569    3777 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 09:43:36.496616    3777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 09:43:36.504105    3777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:43:36.510647    3777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 09:43:36.517216    3777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:43:36.523841    3777 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 09:43:36.530547    3777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 09:43:36.537072    3777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 09:43:36.542880    3777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 09:43:36.548757    3777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:43:36.631012    3777 ssh_runner.go:195] Run: sudo systemctl restart containerd
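
Each sed invocation above rewrites one containerd setting; the key one for the chosen cgroup driver forces SystemdCgroup = false. The same substitution expressed in Go, on a stand-in config snippet rather than the real /etc/containerd/config.toml:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for a fragment of /etc/containerd/config.toml.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Same rewrite the sed command performs: force cgroupfs by setting
	// SystemdCgroup = false while preserving indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
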
	I0610 09:43:36.643434    3777 start.go:481] detecting cgroup driver to use...
	I0610 09:43:36.643506    3777 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 09:43:36.657614    3777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:43:36.671979    3777 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 09:43:36.685752    3777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:43:36.694742    3777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:43:36.703725    3777 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 09:43:36.734070    3777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:43:36.742419    3777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:43:36.754955    3777 ssh_runner.go:195] Run: which cri-dockerd
	I0610 09:43:36.757394    3777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 09:43:36.764022    3777 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 09:43:36.775268    3777 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 09:43:36.863333    3777 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 09:43:36.957766    3777 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 09:43:36.957776    3777 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 09:43:36.970068    3777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:43:37.053806    3777 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:43:38.433287    3777 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.379471866s)
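
Here docker itself is switched to the cgroupfs driver by writing a small daemon.json. The log records only its size (144 bytes), not its content, so the payload below is an assumed minimal equivalent:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed payload: the log only says a 144-byte daemon.json was
	// written to configure the cgroupfs driver.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // destined for /etc/docker/daemon.json
}
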
	I0610 09:43:38.433342    3777 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:43:38.519586    3777 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 09:43:38.605904    3777 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:43:38.700162    3777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:43:38.793234    3777 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 09:43:38.808954    3777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:43:38.900778    3777 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0610 09:43:38.957368    3777 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 09:43:38.957464    3777 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 09:43:38.961141    3777 start.go:549] Will wait 60s for crictl version
	I0610 09:43:38.961188    3777 ssh_runner.go:195] Run: which crictl
	I0610 09:43:38.963806    3777 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 09:43:38.989868    3777 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
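
The two 60-second waits above gate progress on the CRI socket appearing and on crictl answering. A sketch of the socket half (waitForSocket is an illustrative name, not minikube's API):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes, mirroring
// the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket ready; safe to run `crictl version`")
}
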
	I0610 09:43:38.989932    3777 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:43:39.008288    3777 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:43:39.051313    3777 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0610 09:43:39.051418    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetIP
	I0610 09:43:39.051845    3777 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0610 09:43:39.056135    3777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
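
The grep/echo/cp pipeline above injects the host.minikube.internal record into the guest's /etc/hosts, replacing any stale entry. The same edit expressed as a pure function on a string, so the sketch is side-effect free:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord drops any existing record ending in "\t<name>" and
// appends a fresh "<ip>\t<name>" line, like the pipeline in the log.
func injectHostRecord(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n"
	fmt.Print(injectHostRecord(hosts, "192.168.64.1", "host.minikube.internal"))
}
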
	I0610 09:43:39.064906    3777 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:43:39.064965    3777 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:43:39.078736    3777 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:43:39.078745    3777 docker.go:563] Images already preloaded, skipping extraction
	I0610 09:43:39.078814    3777 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:43:39.091774    3777 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:43:39.091790    3777 cache_images.go:84] Images are preloaded, skipping loading
	I0610 09:43:39.091876    3777 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 09:43:39.110761    3777 cni.go:84] Creating CNI manager for ""
	I0610 09:43:39.110772    3777 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:43:39.110785    3777 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 09:43:39.110798    3777 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.14 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-826000-m02 NodeName:multinode-826000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 09:43:39.110913    3777 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-826000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 09:43:39.110979    3777 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-826000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-826000-m02 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
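
The kubeadm config and kubelet unit above are rendered from the option struct logged at kubeadm.go:176. A compact sketch of that rendering step, using an abbreviated template rather than minikube's real one:

package main

import (
	"os"
	"text/template"
)

// An abbreviated InitConfiguration template; minikube's actual template
// covers the full config printed above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	// Field values taken from the kubeadm options line above.
	params := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}{"192.168.64.14", 8443, "unix:///var/run/cri-dockerd.sock", "multinode-826000-m02"}
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
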
	I0610 09:43:39.111037    3777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 09:43:39.117397    3777 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 09:43:39.117460    3777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 09:43:39.123634    3777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0610 09:43:39.136267    3777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 09:43:39.147895    3777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0610 09:43:39.159935    3777 ssh_runner.go:195] Run: grep 192.168.64.14	control-plane.minikube.internal$ /etc/hosts
	I0610 09:43:39.162566    3777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:43:39.171206    3777 certs.go:56] Setting up /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02 for IP: 192.168.64.14
	I0610 09:43:39.171219    3777 certs.go:190] acquiring lock for shared ca certs: {Name:mk1e521581ce58a8d2ad5f887c3da11f6a7a0530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:43:39.171378    3777 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.key
	I0610 09:43:39.171430    3777 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16578-1235/.minikube/proxy-client-ca.key
	I0610 09:43:39.171475    3777 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/client.key
	I0610 09:43:39.171484    3777 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/client.crt with IP's: []
	I0610 09:43:39.253970    3777 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/client.crt ...
	I0610 09:43:39.253980    3777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/client.crt: {Name:mk5b68b6c1d168c957db08cc06c4683798487530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:43:39.254310    3777 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/client.key ...
	I0610 09:43:39.254315    3777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/client.key: {Name:mk84383eb0c00ba00cb9be6a119394f7953936a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:43:39.254497    3777 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/apiserver.key.6d3239a2
	I0610 09:43:39.254508    3777 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/apiserver.crt.6d3239a2 with IP's: [192.168.64.14 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 09:43:39.321674    3777 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/apiserver.crt.6d3239a2 ...
	I0610 09:43:39.321683    3777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/apiserver.crt.6d3239a2: {Name:mk6f1723a5b7e15531e09f9401b0577e5d937b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:43:39.321996    3777 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/apiserver.key.6d3239a2 ...
	I0610 09:43:39.322002    3777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/apiserver.key.6d3239a2: {Name:mkb698c04d3caae0c2e5ebb690a74b5088e50ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:43:39.322206    3777 certs.go:337] copying /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/apiserver.crt.6d3239a2 -> /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/apiserver.crt
	I0610 09:43:39.322359    3777 certs.go:341] copying /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/apiserver.key.6d3239a2 -> /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/apiserver.key
	I0610 09:43:39.322518    3777 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/proxy-client.key
	I0610 09:43:39.322529    3777 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/proxy-client.crt with IP's: []
	I0610 09:43:39.404409    3777 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/proxy-client.crt ...
	I0610 09:43:39.404417    3777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/proxy-client.crt: {Name:mk7cc3e4b92c958bebc3d5e9871cfd02cd2e443e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:43:39.404718    3777 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/proxy-client.key ...
	I0610 09:43:39.404723    3777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/proxy-client.key: {Name:mk549bd1b74ba3e2eba92dabb71b79a44278bc1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
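
The crypto.go steps above mint per-profile certificates: a client cert, an apiserver serving cert, and an aggregator proxy-client cert, each signed by the cached minikubeCA. A condensed sketch of issuing a CA-signed client certificate with the standard library (subjects and lifetimes are placeholders, not minikube's values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Self-signed CA standing in for the cached minikubeCA above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Client certificate signed by that CA, as in the
	// "generating minikube-user signed cert" step.
	clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	client := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, client, caCert, &clientKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued client cert: %d DER bytes\n", len(der))
}
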
	I0610 09:43:39.405136    3777 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/1682.pem (1338 bytes)
	W0610 09:43:39.405177    3777 certs.go:433] ignoring /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/1682_empty.pem, impossibly tiny 0 bytes
	I0610 09:43:39.405187    3777 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca-key.pem (1675 bytes)
	I0610 09:43:39.405219    3777 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/ca.pem (1078 bytes)
	I0610 09:43:39.405249    3777 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/cert.pem (1123 bytes)
	I0610 09:43:39.405278    3777 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/certs/key.pem (1679 bytes)
	I0610 09:43:39.405346    3777 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem (1708 bytes)
	I0610 09:43:39.405814    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 09:43:39.422566    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 09:43:39.439182    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 09:43:39.456515    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m02/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 09:43:39.472799    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 09:43:39.488944    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 09:43:39.505853    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 09:43:39.521951    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 09:43:39.537963    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 09:43:39.554645    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/certs/1682.pem --> /usr/share/ca-certificates/1682.pem (1338 bytes)
	I0610 09:43:39.570587    3777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/ssl/certs/16822.pem --> /usr/share/ca-certificates/16822.pem (1708 bytes)
	I0610 09:43:39.586578    3777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 09:43:39.597714    3777 ssh_runner.go:195] Run: openssl version
	I0610 09:43:39.602250    3777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1682.pem && ln -fs /usr/share/ca-certificates/1682.pem /etc/ssl/certs/1682.pem"
	I0610 09:43:39.608995    3777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1682.pem
	I0610 09:43:39.612086    3777 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 10 16:27 /usr/share/ca-certificates/1682.pem
	I0610 09:43:39.612121    3777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1682.pem
	I0610 09:43:39.615771    3777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1682.pem /etc/ssl/certs/51391683.0"
	I0610 09:43:39.622583    3777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16822.pem && ln -fs /usr/share/ca-certificates/16822.pem /etc/ssl/certs/16822.pem"
	I0610 09:43:39.629341    3777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16822.pem
	I0610 09:43:39.632298    3777 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 10 16:27 /usr/share/ca-certificates/16822.pem
	I0610 09:43:39.632331    3777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16822.pem
	I0610 09:43:39.635997    3777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16822.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 09:43:39.642685    3777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 09:43:39.649247    3777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:43:39.653016    3777 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:43:39.653065    3777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:43:39.656758    3777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
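
Each certificate above is installed by hashing it with openssl and symlinking it into /etc/ssl/certs as <hash>.0, which is how OpenSSL-based clients locate trust roots. A sketch of one install (installCA is an illustrative name; it shells out to openssl just as the log does):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCA computes the cert's OpenSSL subject hash and symlinks the PEM
// into /etc/ssl/certs under "<hash>.0".
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// -f replaces a stale link; the log guards with `test -L` first.
	return exec.Command("ln", "-fs", pem, link).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}
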
	I0610 09:43:39.663469    3777 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 09:43:39.666100    3777 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 09:43:39.666140    3777 kubeadm.go:404] StartCluster: {Name:multinode-826000-m02 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-826000-m02 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.14 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:43:39.666245    3777 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 09:43:39.678652    3777 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 09:43:39.684875    3777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 09:43:39.690774    3777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 09:43:39.696785    3777 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 09:43:39.696808    3777 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 09:43:39.735673    3777 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0610 09:43:39.735722    3777 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 09:43:39.830631    3777 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 09:43:39.830715    3777 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 09:43:39.830798    3777 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0610 09:43:39.951363    3777 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 09:43:39.972736    3777 out.go:204]   - Generating certificates and keys ...
	I0610 09:43:39.972816    3777 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 09:43:39.972863    3777 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 09:43:40.253058    3777 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 09:43:40.578006    3777 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 09:43:40.726368    3777 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 09:43:40.859622    3777 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 09:43:41.186391    3777 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 09:43:41.186478    3777 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-826000-m02] and IPs [192.168.64.14 127.0.0.1 ::1]
	I0610 09:43:41.366841    3777 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 09:43:41.367047    3777 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-826000-m02] and IPs [192.168.64.14 127.0.0.1 ::1]
	I0610 09:43:41.487025    3777 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 09:43:41.750090    3777 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 09:43:41.993996    3777 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 09:43:41.994165    3777 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 09:43:42.265795    3777 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 09:43:42.541281    3777 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 09:43:42.628804    3777 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 09:43:42.834095    3777 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 09:43:42.845259    3777 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 09:43:42.845534    3777 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 09:43:42.845600    3777 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 09:43:42.940901    3777 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 09:43:42.962288    3777 out.go:204]   - Booting up control plane ...
	I0610 09:43:42.962379    3777 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 09:43:42.962486    3777 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 09:43:42.962536    3777 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 09:43:42.962611    3777 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 09:43:42.962729    3777 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 09:43:49.947835    3777 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.001708 seconds
	I0610 09:43:49.947930    3777 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 09:43:49.958034    3777 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 09:43:50.474017    3777 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 09:43:50.474165    3777 kubeadm.go:322] [mark-control-plane] Marking the node multinode-826000-m02 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 09:43:50.981185    3777 kubeadm.go:322] [bootstrap-token] Using token: 18f9w9.ambuckkc298iw5ev
	I0610 09:43:51.018656    3777 out.go:204]   - Configuring RBAC rules ...
	I0610 09:43:51.018921    3777 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 09:43:51.021527    3777 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 09:43:51.061541    3777 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 09:43:51.063888    3777 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 09:43:51.065894    3777 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 09:43:51.068481    3777 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 09:43:51.080565    3777 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 09:43:51.242963    3777 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 09:43:51.425012    3777 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 09:43:51.425663    3777 kubeadm.go:322] 
	I0610 09:43:51.425715    3777 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 09:43:51.425718    3777 kubeadm.go:322] 
	I0610 09:43:51.425772    3777 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 09:43:51.425774    3777 kubeadm.go:322] 
	I0610 09:43:51.425796    3777 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 09:43:51.425860    3777 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 09:43:51.425916    3777 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 09:43:51.425927    3777 kubeadm.go:322] 
	I0610 09:43:51.425964    3777 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0610 09:43:51.425966    3777 kubeadm.go:322] 
	I0610 09:43:51.426024    3777 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 09:43:51.426028    3777 kubeadm.go:322] 
	I0610 09:43:51.426063    3777 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 09:43:51.426124    3777 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 09:43:51.426175    3777 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 09:43:51.426177    3777 kubeadm.go:322] 
	I0610 09:43:51.426251    3777 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 09:43:51.426316    3777 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 09:43:51.426318    3777 kubeadm.go:322] 
	I0610 09:43:51.426406    3777 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 18f9w9.ambuckkc298iw5ev \
	I0610 09:43:51.426487    3777 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bbecbd97dc6f81e6fad59f59c7cfd513bc3a28642154b16be7e48c15e587d7 \
	I0610 09:43:51.426505    3777 kubeadm.go:322] 	--control-plane 
	I0610 09:43:51.426507    3777 kubeadm.go:322] 
	I0610 09:43:51.426575    3777 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 09:43:51.426577    3777 kubeadm.go:322] 
	I0610 09:43:51.426640    3777 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 18f9w9.ambuckkc298iw5ev \
	I0610 09:43:51.426716    3777 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bbecbd97dc6f81e6fad59f59c7cfd513bc3a28642154b16be7e48c15e587d7 
	I0610 09:43:51.427747    3777 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 09:43:51.427874    3777 kubeadm.go:322] W0610 16:43:38.899604    1454 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:43:51.428000    3777 kubeadm.go:322] W0610 16:43:42.019840    1454 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:43:51.428011    3777 cni.go:84] Creating CNI manager for ""
	I0610 09:43:51.428019    3777 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:43:51.449706    3777 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 09:43:51.523274    3777 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 09:43:51.558940    3777 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0610 09:43:51.585553    3777 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 09:43:51.585637    3777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:43:51.585640    3777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5 minikube.k8s.io/name=multinode-826000-m02 minikube.k8s.io/updated_at=2023_06_10T09_43_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:43:51.615315    3777 ops.go:34] apiserver oom_adj: -16
	I0610 09:43:51.688610    3777 kubeadm.go:1076] duration metric: took 103.047754ms to wait for elevateKubeSystemPrivileges.
	I0610 09:43:51.688627    3777 kubeadm.go:406] StartCluster complete in 12.022532461s
	I0610 09:43:51.688641    3777 settings.go:142] acquiring lock: {Name:mkb9b6482d5ac8949a51ff4918d4bb9ad74e8d46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:43:51.688714    3777 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:43:51.689498    3777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/kubeconfig: {Name:mk52bc17fccce955e53da0cb42ca8ae2dd34c214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:43:51.689732    3777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 09:43:51.689753    3777 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0610 09:43:51.689802    3777 addons.go:66] Setting storage-provisioner=true in profile "multinode-826000-m02"
	I0610 09:43:51.689806    3777 addons.go:66] Setting default-storageclass=true in profile "multinode-826000-m02"
	I0610 09:43:51.689816    3777 addons.go:228] Setting addon storage-provisioner=true in "multinode-826000-m02"
	I0610 09:43:51.689819    3777 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-826000-m02"
	I0610 09:43:51.689846    3777 config.go:182] Loaded profile config "multinode-826000-m02": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:43:51.689851    3777 host.go:66] Checking if "multinode-826000-m02" exists ...
	I0610 09:43:51.690107    3777 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:43:51.690110    3777 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:43:51.690118    3777 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:43:51.690119    3777 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:43:51.700408    3777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51350
	I0610 09:43:51.700423    3777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51349
	I0610 09:43:51.700750    3777 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:43:51.700768    3777 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:43:51.701106    3777 main.go:141] libmachine: Using API Version  1
	I0610 09:43:51.701115    3777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:43:51.701121    3777 main.go:141] libmachine: Using API Version  1
	I0610 09:43:51.701130    3777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:43:51.701337    3777 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:43:51.701353    3777 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:43:51.701435    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetState
	I0610 09:43:51.701514    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:43:51.701580    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | hyperkit pid from json: 3786
	I0610 09:43:51.701725    3777 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:43:51.701752    3777 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:43:51.708266    3777 addons.go:228] Setting addon default-storageclass=true in "multinode-826000-m02"
	I0610 09:43:51.708290    3777 host.go:66] Checking if "multinode-826000-m02" exists ...
	I0610 09:43:51.708549    3777 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:43:51.708569    3777 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:43:51.709537    3777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51353
	I0610 09:43:51.710459    3777 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:43:51.710826    3777 main.go:141] libmachine: Using API Version  1
	I0610 09:43:51.710834    3777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:43:51.711019    3777 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:43:51.711139    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetState
	I0610 09:43:51.711247    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:43:51.711301    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | hyperkit pid from json: 3786
	I0610 09:43:51.712229    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .DriverName
	I0610 09:43:51.733327    3777 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:43:51.715971    3777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51355
	I0610 09:43:51.733752    3777 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:43:51.772295    3777 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:43:51.772302    3777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 09:43:51.772317    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:51.772435    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:51.772581    3777 main.go:141] libmachine: Using API Version  1
	I0610 09:43:51.772590    3777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:43:51.772594    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:51.772719    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:51.772837    3777 sshutil.go:53] new ssh client: &{IP:192.168.64.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/id_rsa Username:docker}
	I0610 09:43:51.772854    3777 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:43:51.773223    3777 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:43:51.773253    3777 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:43:51.775553    3777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.64.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 09:43:51.780223    3777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51358
	I0610 09:43:51.780524    3777 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:43:51.780895    3777 main.go:141] libmachine: Using API Version  1
	I0610 09:43:51.780910    3777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:43:51.781126    3777 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:43:51.781235    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetState
	I0610 09:43:51.781314    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 09:43:51.781397    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | hyperkit pid from json: 3786
	I0610 09:43:51.782316    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .DriverName
	I0610 09:43:51.782464    3777 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 09:43:51.782468    3777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 09:43:51.782476    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHHostname
	I0610 09:43:51.782543    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHPort
	I0610 09:43:51.782611    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHKeyPath
	I0610 09:43:51.782693    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .GetSSHUsername
	I0610 09:43:51.782772    3777 sshutil.go:53] new ssh client: &{IP:192.168.64.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/multinode-826000-m02/id_rsa Username:docker}
	I0610 09:43:51.859568    3777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:43:51.897177    3777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 09:43:52.247794    3777 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-826000-m02" context rescaled to 1 replicas
	I0610 09:43:52.247813    3777 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.64.14 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:43:52.268901    3777 out.go:177] * Verifying Kubernetes components...
	I0610 09:43:52.310979    3777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:43:52.691508    3777 start.go:916] {"host.minikube.internal": 192.168.64.1} host record injected into CoreDNS's ConfigMap
	I0610 09:43:52.808063    3777 main.go:141] libmachine: Making call to close driver server
	I0610 09:43:52.808078    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .Close
	I0610 09:43:52.808234    3777 main.go:141] libmachine: Making call to close driver server
	I0610 09:43:52.808247    3777 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:43:52.808254    3777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:43:52.808261    3777 main.go:141] libmachine: Making call to close driver server
	I0610 09:43:52.808265    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .Close
	I0610 09:43:52.808265    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .Close
	I0610 09:43:52.808465    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Closing plugin on server side
	I0610 09:43:52.808480    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Closing plugin on server side
	I0610 09:43:52.808502    3777 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:43:52.808509    3777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:43:52.808508    3777 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:43:52.808517    3777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:43:52.808520    3777 main.go:141] libmachine: Making call to close driver server
	I0610 09:43:52.808525    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .Close
	I0610 09:43:52.808527    3777 main.go:141] libmachine: Making call to close driver server
	I0610 09:43:52.808533    3777 main.go:141] libmachine: (multinode-826000-m02) Calling .Close
	I0610 09:43:52.808676    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Closing plugin on server side
	I0610 09:43:52.808693    3777 main.go:141] libmachine: (multinode-826000-m02) DBG | Closing plugin on server side
	I0610 09:43:52.808715    3777 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:43:52.808722    3777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:43:52.808724    3777 main.go:141] libmachine: Successfully made call to close driver server
	I0610 09:43:52.808751    3777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 09:43:52.809248    3777 api_server.go:52] waiting for apiserver process to appear ...
	I0610 09:43:52.830398    3777 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0610 09:43:52.830469    3777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:43:52.887290    3777 addons.go:499] enable addons completed in 1.197545085s: enabled=[default-storageclass storage-provisioner]
	I0610 09:43:52.897376    3777 api_server.go:72] duration metric: took 649.544008ms to wait for apiserver process to appear ...
	I0610 09:43:52.897386    3777 api_server.go:88] waiting for apiserver healthz status ...
	I0610 09:43:52.897402    3777 api_server.go:253] Checking apiserver healthz at https://192.168.64.14:8443/healthz ...
	I0610 09:43:52.900988    3777 api_server.go:279] https://192.168.64.14:8443/healthz returned 200:
	ok
	I0610 09:43:52.901908    3777 api_server.go:141] control plane version: v1.27.2
	I0610 09:43:52.901918    3777 api_server.go:131] duration metric: took 4.528096ms to wait for apiserver health ...
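
Cluster readiness above is decided by polling the apiserver's /healthz endpoint until it returns 200 "ok". A sketch of that wait (TLS verification is skipped here only to keep the sketch short; the real client trusts minikube's CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200 "ok" or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == 200 && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("healthz never returned ok within %v", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.64.14:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
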
	I0610 09:43:52.901926    3777 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 09:43:52.907824    3777 system_pods.go:59] 5 kube-system pods found
	I0610 09:43:52.907836    3777 system_pods.go:61] "etcd-multinode-826000-m02" [91a98827-f55d-4c86-9695-7b47dcb732a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0610 09:43:52.907843    3777 system_pods.go:61] "kube-apiserver-multinode-826000-m02" [1c57a322-ec81-43ad-88bc-c5a390f04e25] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 09:43:52.907847    3777 system_pods.go:61] "kube-controller-manager-multinode-826000-m02" [8a45ab7a-f91a-4496-aa31-ec8dd0606ae5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0610 09:43:52.907851    3777 system_pods.go:61] "kube-scheduler-multinode-826000-m02" [5cfa39a5-0d9e-4879-8475-6e7312606018] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0610 09:43:52.907854    3777 system_pods.go:61] "storage-provisioner" [3a9f7c75-22bf-4dc7-89ed-5e7032aa869d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0610 09:43:52.907857    3777 system_pods.go:74] duration metric: took 5.928995ms to wait for pod list to return data ...
	I0610 09:43:52.907862    3777 kubeadm.go:581] duration metric: took 660.033174ms to wait for : map[apiserver:true system_pods:true] ...
	I0610 09:43:52.907870    3777 node_conditions.go:102] verifying NodePressure condition ...
	I0610 09:43:52.909784    3777 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0610 09:43:52.909796    3777 node_conditions.go:123] node cpu capacity is 2
	I0610 09:43:52.909804    3777 node_conditions.go:105] duration metric: took 1.931541ms to run NodePressure ...
	I0610 09:43:52.909810    3777 start.go:228] waiting for startup goroutines ...
	I0610 09:43:52.909813    3777 start.go:233] waiting for cluster config update ...
	I0610 09:43:52.909820    3777 start.go:242] writing updated cluster config ...
	I0610 09:43:52.910145    3777 ssh_runner.go:195] Run: rm -f paused
	I0610 09:43:52.948454    3777 start.go:573] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0610 09:43:52.985386    3777 out.go:177] 
	W0610 09:43:53.006617    3777 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0610 09:43:53.027327    3777 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0610 09:43:53.122250    3777 out.go:177] * Done! kubectl is now configured to use "multinode-826000-m02" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 16:41:28 UTC, ends at Sat 2023-06-10 16:43:59 UTC. --
	Jun 10 16:42:18 multinode-826000 dockerd[828]: time="2023-06-10T16:42:18.409192268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:18 multinode-826000 dockerd[828]: time="2023-06-10T16:42:18.409219071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:42:18 multinode-826000 dockerd[828]: time="2023-06-10T16:42:18.409235270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:20 multinode-826000 cri-dockerd[1030]: time="2023-06-10T16:42:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b42123042975e0a2733d510ff5b7dff436088ae55c7330fdf05be6f5d7d18795/resolv.conf as [nameserver 192.168.64.1]"
	Jun 10 16:42:20 multinode-826000 dockerd[828]: time="2023-06-10T16:42:20.549255559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:42:20 multinode-826000 dockerd[828]: time="2023-06-10T16:42:20.549317273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:20 multinode-826000 dockerd[828]: time="2023-06-10T16:42:20.549341378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:42:20 multinode-826000 dockerd[828]: time="2023-06-10T16:42:20.549352886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.312762117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.312808662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.312823461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.312832950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:33 multinode-826000 cri-dockerd[1030]: time="2023-06-10T16:42:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7c86056c94d3df26c2732ba843da6cb214d22264baf724bc497ce210e23d6ef/resolv.conf as [nameserver 192.168.64.1]"
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.687274491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.687428685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.687505429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:42:33 multinode-826000 dockerd[828]: time="2023-06-10T16:42:33.687562435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:42:48 multinode-826000 dockerd[822]: time="2023-06-10T16:42:48.745902829Z" level=info msg="ignoring event" container=6785f017705fb0ff8ff001be05a8d805bfd54959faa364f4ad662907f7735d9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:42:48 multinode-826000 dockerd[828]: time="2023-06-10T16:42:48.746290774Z" level=info msg="shim disconnected" id=6785f017705fb0ff8ff001be05a8d805bfd54959faa364f4ad662907f7735d9f namespace=moby
	Jun 10 16:42:48 multinode-826000 dockerd[828]: time="2023-06-10T16:42:48.746794242Z" level=warning msg="cleaning up after shim disconnected" id=6785f017705fb0ff8ff001be05a8d805bfd54959faa364f4ad662907f7735d9f namespace=moby
	Jun 10 16:42:48 multinode-826000 dockerd[828]: time="2023-06-10T16:42:48.746839006Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:43:02 multinode-826000 dockerd[828]: time="2023-06-10T16:43:02.462611903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:43:02 multinode-826000 dockerd[828]: time="2023-06-10T16:43:02.462785940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:43:02 multinode-826000 dockerd[828]: time="2023-06-10T16:43:02.462825095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:43:02 multinode-826000 dockerd[828]: time="2023-06-10T16:43:02.462850442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID
	55e30aa3e039b       6e38f40d628db                                                                              57 seconds ago       Running             storage-provisioner       2                   ebb2772ed9c64
	665d2bfd37808       ead0a4a53df89                                                                              About a minute ago   Running             coredns                   1                   d7c86056c94d3
	45d0df95b7154       b0b1fa0f58c6e                                                                              About a minute ago   Running             kindnet-cni               1                   b42123042975e
	6785f017705fb       6e38f40d628db                                                                              About a minute ago   Exited              storage-provisioner       1                   ebb2772ed9c64
	5cd149e6a33f9       b8aa50768fd67                                                                              About a minute ago   Running             kube-proxy                1                   f9883f5613a3b
	b6511a7a9032c       86b6af7dd652c                                                                              About a minute ago   Running             etcd                      1                   e6d149ccc12e3
	8ba9a16fd0bb2       89e70da428d29                                                                              About a minute ago   Running             kube-scheduler            1                   bc491bac713bd
	1a20ece454029       ac2b7465ebba9                                                                              About a minute ago   Running             kube-controller-manager   1                   e1cb83b607e86
	492eebc8d7c90       c5b13e4f7806d                                                                              About a minute ago   Running             kube-apiserver            1                   5e4eac218705e
	12619bc2bf572       ead0a4a53df89                                                                              2 minutes ago        Exited              coredns                   0                   2494e4985fe38
	dcf36c339d8e9       kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974   2 minutes ago        Exited              kindnet-cni               0                   fe54448abb1ac
	3246cc4a932c7       b8aa50768fd67                                                                              3 minutes ago        Exited              kube-proxy                0                   f4c3162aaa5c0
	ba32349cda752       86b6af7dd652c                                                                              3 minutes ago        Exited              etcd                      0                   2d94b625d191b
	c0054420e3b8f       89e70da428d29                                                                              3 minutes ago        Exited              kube-scheduler            0                   1e876d1d39ca0
	ae72b9818103a       ac2b7465ebba9                                                                              3 minutes ago        Exited              kube-controller-manager   0                   8f3a0f3eaddd1
	0a2f2c979d7b0       c5b13e4f7806d                                                                              3 minutes ago        Exited              kube-apiserver            0                   2023590fd394b
	
	* 
	* ==> coredns [12619bc2bf57] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 82b95b61957b89eeea31bdaf6987f010031330ef97d5f8469dbdaa80b119a5b0c9955b961009dd5b77ee3ada002b456836be781510516cbd9d015b1a704a24ea
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55077 - 62156 "HINFO IN 783487967199058609.7483377405974833132. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.004630964s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [665d2bfd3780] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 82b95b61957b89eeea31bdaf6987f010031330ef97d5f8469dbdaa80b119a5b0c9955b961009dd5b77ee3ada002b456836be781510516cbd9d015b1a704a24ea
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60235 - 30170 "HINFO IN 7061034121563959463.7423390287495701571. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004674814s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-826000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-826000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5
	                    minikube.k8s.io/name=multinode-826000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T09_40_43_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 16:40:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-826000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 16:43:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 16:42:26 +0000   Sat, 10 Jun 2023 16:40:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 16:42:26 +0000   Sat, 10 Jun 2023 16:40:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 16:42:26 +0000   Sat, 10 Jun 2023 16:40:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 16:42:26 +0000   Sat, 10 Jun 2023 16:42:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.12
	  Hostname:    multinode-826000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 549be7735a0542d0a254ccc3bb88af35
	  System UUID:                39eb11ee-0000-0000-b579-f01898ef957c
	  Boot ID:                    f1a567ca-36de-47f1-bba2-37d393f013e9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-r9sjl                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m5s
	  kube-system                 etcd-multinode-826000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m20s
	  kube-system                 kindnet-9r8df                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m5s
	  kube-system                 kube-apiserver-multinode-826000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 kube-controller-manager-multinode-826000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 kube-proxy-7dxj9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 kube-scheduler-multinode-826000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  NodeHasSufficientPID     3m18s                kubelet          Node multinode-826000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m18s                kubelet          Node multinode-826000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m18s                kubelet          Node multinode-826000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 3m18s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           3m5s                 node-controller  Node multinode-826000 event: Registered Node multinode-826000 in Controller
	  Normal  NodeReady                2m55s                kubelet          Node multinode-826000 status is now: NodeReady
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  109s (x8 over 109s)  kubelet          Node multinode-826000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x8 over 109s)  kubelet          Node multinode-826000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x7 over 109s)  kubelet          Node multinode-826000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           92s                  node-controller  Node multinode-826000 event: Registered Node multinode-826000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.027851] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +4.591261] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007042] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.255545] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.040456] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.894050] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +26.539148] systemd-fstab-generator[522]: Ignoring "noauto" for root device
	[  +0.080871] systemd-fstab-generator[533]: Ignoring "noauto" for root device
	[  +0.787752] systemd-fstab-generator[750]: Ignoring "noauto" for root device
	[  +0.214988] systemd-fstab-generator[789]: Ignoring "noauto" for root device
	[  +0.086105] systemd-fstab-generator[800]: Ignoring "noauto" for root device
	[  +0.094774] systemd-fstab-generator[813]: Ignoring "noauto" for root device
	[  +1.354206] systemd-fstab-generator[975]: Ignoring "noauto" for root device
	[  +0.089485] systemd-fstab-generator[986]: Ignoring "noauto" for root device
	[  +0.096423] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +0.091733] systemd-fstab-generator[1008]: Ignoring "noauto" for root device
	[  +0.098751] systemd-fstab-generator[1022]: Ignoring "noauto" for root device
	[Jun10 16:42] systemd-fstab-generator[1260]: Ignoring "noauto" for root device
	[  +0.239018] kauditd_printk_skb: 67 callbacks suppressed
	[ +17.550402] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [b6511a7a9032] <==
	* {"level":"info","ts":"2023-06-10T16:42:13.481Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-10T16:42:13.481Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-10T16:42:13.481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 switched to configuration voters=(9888510509761246144)"}
	{"level":"info","ts":"2023-06-10T16:42:13.481Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","added-peer-id":"893b0beac40933c0","added-peer-peer-urls":["https://192.168.64.12:2380"]}
	{"level":"info","ts":"2023-06-10T16:42:13.481Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:42:13.482Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:42:13.485Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-10T16:42:13.486Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-06-10T16:42:13.486Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"893b0beac40933c0","initial-advertise-peer-urls":["https://192.168.64.12:2380"],"listen-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.12:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-10T16:42:13.486Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-10T16:42:13.486Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-06-10T16:42:15.371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 is starting a new election at term 2"}
	{"level":"info","ts":"2023-06-10T16:42:15.371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-06-10T16:42:15.371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgPreVoteResp from 893b0beac40933c0 at term 2"}
	{"level":"info","ts":"2023-06-10T16:42:15.371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became candidate at term 3"}
	{"level":"info","ts":"2023-06-10T16:42:15.372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgVoteResp from 893b0beac40933c0 at term 3"}
	{"level":"info","ts":"2023-06-10T16:42:15.372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became leader at term 3"}
	{"level":"info","ts":"2023-06-10T16:42:15.372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 893b0beac40933c0 elected leader 893b0beac40933c0 at term 3"}
	{"level":"info","ts":"2023-06-10T16:42:15.374Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"893b0beac40933c0","local-member-attributes":"{Name:multinode-826000 ClientURLs:[https://192.168.64.12:2379]}","request-path":"/0/members/893b0beac40933c0/attributes","cluster-id":"51ecae2d8304f353","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:42:15.374Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:42:15.374Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:42:15.374Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:42:15.374Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:42:15.375Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.12:2379"}
	{"level":"info","ts":"2023-06-10T16:42:15.375Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [ba32349cda75] <==
	* {"level":"info","ts":"2023-06-10T16:40:37.818Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgPreVoteResp from 893b0beac40933c0 at term 1"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became candidate at term 2"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 received MsgVoteResp from 893b0beac40933c0 at term 2"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"893b0beac40933c0 became leader at term 2"}
	{"level":"info","ts":"2023-06-10T16:40:37.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 893b0beac40933c0 elected leader 893b0beac40933c0 at term 2"}
	{"level":"info","ts":"2023-06-10T16:40:37.987Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"893b0beac40933c0","local-member-attributes":"{Name:multinode-826000 ClientURLs:[https://192.168.64.12:2379]}","request-path":"/0/members/893b0beac40933c0/attributes","cluster-id":"51ecae2d8304f353","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:40:37.987Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:40:37.990Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T16:40:37.990Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:40:37.991Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.12:2379"}
	{"level":"info","ts":"2023-06-10T16:40:37.991Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:40:37.994Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:40:38.000Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:40:38.000Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"51ecae2d8304f353","local-member-id":"893b0beac40933c0","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:40:38.000Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:40:38.000Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:41:11.906Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-06-10T16:41:11.906Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"multinode-826000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"]}
	{"level":"info","ts":"2023-06-10T16:41:11.914Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"893b0beac40933c0","current-leader-member-id":"893b0beac40933c0"}
	{"level":"info","ts":"2023-06-10T16:41:11.915Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-06-10T16:41:11.916Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.64.12:2380"}
	{"level":"info","ts":"2023-06-10T16:41:11.916Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"multinode-826000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.12:2380"],"advertise-client-urls":["https://192.168.64.12:2379"]}
	
	* 
	* ==> kernel <==
	*  16:44:00 up 2 min,  0 users,  load average: 0.71, 0.24, 0.08
	Linux multinode-826000 5.10.57 #1 SMP Wed Jun 7 04:45:40 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [45d0df95b715] <==
	* I0610 16:42:20.820122       1 main.go:107] hostIP = 192.168.64.12
	podIP = 192.168.64.12
	I0610 16:42:20.820364       1 main.go:116] setting mtu 1500 for CNI 
	I0610 16:42:20.820391       1 main.go:146] kindnetd IP family: "ipv4"
	I0610 16:42:20.820426       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 16:42:21.118301       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:42:21.118339       1 main.go:227] handling current node
	I0610 16:42:31.126425       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:42:31.126471       1 main.go:227] handling current node
	I0610 16:42:41.138405       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:42:41.138571       1 main.go:227] handling current node
	I0610 16:42:51.141778       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:42:51.141858       1 main.go:227] handling current node
	I0610 16:43:01.149317       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:43:01.149352       1 main.go:227] handling current node
	I0610 16:43:11.152876       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:43:11.152954       1 main.go:227] handling current node
	I0610 16:43:21.156313       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:43:21.156379       1 main.go:227] handling current node
	I0610 16:43:31.161339       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:43:31.161539       1 main.go:227] handling current node
	I0610 16:43:41.166174       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:43:41.166211       1 main.go:227] handling current node
	I0610 16:43:51.169001       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:43:51.169060       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [dcf36c339d8e] <==
	* I0610 16:41:02.415980       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0610 16:41:02.416124       1 main.go:107] hostIP = 192.168.64.12
	podIP = 192.168.64.12
	I0610 16:41:02.416216       1 main.go:116] setting mtu 1500 for CNI 
	I0610 16:41:02.416259       1 main.go:146] kindnetd IP family: "ipv4"
	I0610 16:41:02.416285       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 16:41:02.724012       1 main.go:223] Handling node with IPs: map[192.168.64.12:{}]
	I0610 16:41:02.724049       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [0a2f2c979d7b] <==
	* W0610 16:41:11.911740       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 16:41:11.911749       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 16:41:11.911767       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	I0610 16:41:11.987257       1 controller.go:228] Shutting down kubernetes service endpoint reconciler
	
	* 
	* ==> kube-apiserver [492eebc8d7c9] <==
	* I0610 16:42:16.420434       1 naming_controller.go:291] Starting NamingConditionController
	I0610 16:42:16.420480       1 establishing_controller.go:76] Starting EstablishingController
	I0610 16:42:16.420513       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0610 16:42:16.420537       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0610 16:42:16.420583       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0610 16:42:16.455560       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0610 16:42:16.456736       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0610 16:42:16.472320       1 shared_informer.go:318] Caches are synced for configmaps
	I0610 16:42:16.475984       1 cache.go:39] Caches are synced for autoregister controller
	I0610 16:42:16.476186       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0610 16:42:16.476214       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0610 16:42:16.476553       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0610 16:42:16.479358       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 16:42:16.482584       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 16:42:16.483838       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0610 16:42:16.546063       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0610 16:42:17.138175       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 16:42:17.376939       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 16:42:19.159598       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0610 16:42:19.246314       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0610 16:42:19.251533       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0610 16:42:19.286774       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 16:42:19.291252       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 16:42:28.948888       1 controller.go:624] quota admission added evaluator for: endpoints
	I0610 16:42:28.960858       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [1a20ece45402] <==
	* I0610 16:42:28.949717       1 shared_informer.go:318] Caches are synced for PV protection
	I0610 16:42:28.952799       1 shared_informer.go:318] Caches are synced for ephemeral
	I0610 16:42:28.954075       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0610 16:42:28.959193       1 shared_informer.go:318] Caches are synced for cronjob
	I0610 16:42:28.964799       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 16:42:28.970251       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 16:42:28.970345       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 16:42:28.974760       1 shared_informer.go:318] Caches are synced for taint
	I0610 16:42:28.974902       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0610 16:42:28.975136       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-826000"
	I0610 16:42:28.975188       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0610 16:42:28.974928       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0610 16:42:28.975439       1 taint_manager.go:211] "Sending events to api server"
	I0610 16:42:28.975620       1 event.go:307] "Event occurred" object="multinode-826000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-826000 event: Registered Node multinode-826000 in Controller"
	I0610 16:42:28.977173       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0610 16:42:28.978342       1 shared_informer.go:318] Caches are synced for stateful set
	I0610 16:42:29.003625       1 shared_informer.go:318] Caches are synced for HPA
	I0610 16:42:29.036274       1 shared_informer.go:318] Caches are synced for deployment
	I0610 16:42:29.049004       1 shared_informer.go:318] Caches are synced for disruption
	I0610 16:42:29.051106       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 16:42:29.071688       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 16:42:29.088150       1 shared_informer.go:318] Caches are synced for attach detach
	I0610 16:42:29.480175       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 16:42:29.480197       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0610 16:42:29.486250       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [ae72b9818103] <==
	* I0610 16:40:55.682025       1 shared_informer.go:318] Caches are synced for PVC protection
	I0610 16:40:55.682087       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0610 16:40:55.682463       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0610 16:40:55.682528       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0610 16:40:55.682969       1 shared_informer.go:318] Caches are synced for job
	I0610 16:40:55.685211       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0610 16:40:55.687715       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0610 16:40:55.717624       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-zhp88"
	I0610 16:40:55.749857       1 shared_informer.go:318] Caches are synced for attach detach
	I0610 16:40:55.751495       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-r9sjl"
	I0610 16:40:55.772147       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 16:40:55.830662       1 shared_informer.go:318] Caches are synced for taint
	I0610 16:40:55.830803       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0610 16:40:55.830866       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0610 16:40:55.830888       1 taint_manager.go:211] "Sending events to api server"
	I0610 16:40:55.831285       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-826000"
	I0610 16:40:55.831308       1 node_lifecycle_controller.go:1027] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0610 16:40:55.831403       1 event.go:307] "Event occurred" object="multinode-826000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-826000 event: Registered Node multinode-826000 in Controller"
	I0610 16:40:55.842686       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 16:40:55.896813       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0610 16:40:55.933788       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-zhp88"
	I0610 16:40:56.194791       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 16:40:56.231277       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 16:40:56.231338       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0610 16:41:05.833553       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	* 
	* ==> kube-proxy [3246cc4a932c] <==
	* I0610 16:40:57.738817       1 node.go:141] Successfully retrieved node IP: 192.168.64.12
	I0610 16:40:57.738885       1 server_others.go:110] "Detected node IP" address="192.168.64.12"
	I0610 16:40:57.738899       1 server_others.go:551] "Using iptables proxy"
	I0610 16:40:57.763801       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 16:40:57.763883       1 server_others.go:190] "Using iptables Proxier"
	I0610 16:40:57.764218       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 16:40:57.764791       1 server.go:657] "Version info" version="v1.27.2"
	I0610 16:40:57.764844       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:40:57.766097       1 config.go:188] "Starting service config controller"
	I0610 16:40:57.766529       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 16:40:57.767401       1 config.go:97] "Starting endpoint slice config controller"
	I0610 16:40:57.767453       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 16:40:57.766609       1 config.go:315] "Starting node config controller"
	I0610 16:40:57.768401       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 16:40:57.867666       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 16:40:57.867833       1 shared_informer.go:318] Caches are synced for service config
	I0610 16:40:57.868467       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [5cd149e6a33f] <==
	* I0610 16:42:18.493700       1 node.go:141] Successfully retrieved node IP: 192.168.64.12
	I0610 16:42:18.493981       1 server_others.go:110] "Detected node IP" address="192.168.64.12"
	I0610 16:42:18.494277       1 server_others.go:551] "Using iptables proxy"
	I0610 16:42:18.767151       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 16:42:18.767185       1 server_others.go:190] "Using iptables Proxier"
	I0610 16:42:18.767499       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 16:42:18.768341       1 server.go:657] "Version info" version="v1.27.2"
	I0610 16:42:18.768371       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:42:18.770179       1 config.go:188] "Starting service config controller"
	I0610 16:42:18.770461       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 16:42:18.770809       1 config.go:97] "Starting endpoint slice config controller"
	I0610 16:42:18.770836       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 16:42:18.773322       1 config.go:315] "Starting node config controller"
	I0610 16:42:18.773349       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 16:42:18.871310       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 16:42:18.871449       1 shared_informer.go:318] Caches are synced for service config
	I0610 16:42:18.873434       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8ba9a16fd0bb] <==
	* I0610 16:42:14.458749       1 serving.go:348] Generated self-signed cert in-memory
	W0610 16:42:16.436078       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0610 16:42:16.436188       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 16:42:16.436234       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 16:42:16.436250       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 16:42:16.461624       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0610 16:42:16.461709       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:42:16.464450       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0610 16:42:16.465341       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 16:42:16.465615       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 16:42:16.468929       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 16:42:16.565769       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [c0054420e3b8] <==
	* E0610 16:40:39.958938       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 16:40:39.959025       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 16:40:39.959136       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 16:40:39.959229       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 16:40:39.959323       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 16:40:39.959392       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:40:39.959472       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:40:39.959730       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 16:40:39.959782       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 16:40:39.959898       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 16:40:39.959973       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 16:40:39.960084       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 16:40:39.960134       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 16:40:40.792534       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 16:40:40.792622       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 16:40:40.962518       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:40:40.962536       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:40:40.981767       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 16:40:40.981852       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 16:40:41.339329       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 16:41:11.929239       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0610 16:41:11.929285       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0610 16:41:11.929426       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0610 16:41:11.929712       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0610 16:41:11.929767       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 16:41:28 UTC, ends at Sat 2023-06-10 16:44:01 UTC. --
	Jun 10 16:42:17 multinode-826000 kubelet[1266]: E0610 16:42:17.552408    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume podName:d3e6fbc7-ad9e-47a1-8592-9a22062f0845 nodeName:}" failed. No retries permitted until 2023-06-10 16:42:18.052395876 +0000 UTC m=+6.817174293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume") pod "coredns-5d78c9869d-r9sjl" (UID: "d3e6fbc7-ad9e-47a1-8592-9a22062f0845") : object "kube-system"/"coredns" not registered
	Jun 10 16:42:18 multinode-826000 kubelet[1266]: E0610 16:42:18.057609    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 10 16:42:18 multinode-826000 kubelet[1266]: E0610 16:42:18.057660    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume podName:d3e6fbc7-ad9e-47a1-8592-9a22062f0845 nodeName:}" failed. No retries permitted until 2023-06-10 16:42:19.057649321 +0000 UTC m=+7.822427738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume") pod "coredns-5d78c9869d-r9sjl" (UID: "d3e6fbc7-ad9e-47a1-8592-9a22062f0845") : object "kube-system"/"coredns" not registered
	Jun 10 16:42:18 multinode-826000 kubelet[1266]: E0610 16:42:18.424598    1266 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-r9sjl" podUID=d3e6fbc7-ad9e-47a1-8592-9a22062f0845
	Jun 10 16:42:19 multinode-826000 kubelet[1266]: E0610 16:42:19.064548    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 10 16:42:19 multinode-826000 kubelet[1266]: E0610 16:42:19.064630    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume podName:d3e6fbc7-ad9e-47a1-8592-9a22062f0845 nodeName:}" failed. No retries permitted until 2023-06-10 16:42:21.064619874 +0000 UTC m=+9.829398292 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume") pod "coredns-5d78c9869d-r9sjl" (UID: "d3e6fbc7-ad9e-47a1-8592-9a22062f0845") : object "kube-system"/"coredns" not registered
	Jun 10 16:42:20 multinode-826000 kubelet[1266]: E0610 16:42:20.460133    1266 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-r9sjl" podUID=d3e6fbc7-ad9e-47a1-8592-9a22062f0845
	Jun 10 16:42:20 multinode-826000 kubelet[1266]: I0610 16:42:20.460162    1266 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42123042975e0a2733d510ff5b7dff436088ae55c7330fdf05be6f5d7d18795"
	Jun 10 16:42:21 multinode-826000 kubelet[1266]: E0610 16:42:21.080306    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 10 16:42:21 multinode-826000 kubelet[1266]: E0610 16:42:21.080395    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume podName:d3e6fbc7-ad9e-47a1-8592-9a22062f0845 nodeName:}" failed. No retries permitted until 2023-06-10 16:42:25.080381955 +0000 UTC m=+13.845160380 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume") pod "coredns-5d78c9869d-r9sjl" (UID: "d3e6fbc7-ad9e-47a1-8592-9a22062f0845") : object "kube-system"/"coredns" not registered
	Jun 10 16:42:21 multinode-826000 kubelet[1266]: E0610 16:42:21.503044    1266 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Jun 10 16:42:22 multinode-826000 kubelet[1266]: E0610 16:42:22.424403    1266 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-r9sjl" podUID=d3e6fbc7-ad9e-47a1-8592-9a22062f0845
	Jun 10 16:42:24 multinode-826000 kubelet[1266]: E0610 16:42:24.424780    1266 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-r9sjl" podUID=d3e6fbc7-ad9e-47a1-8592-9a22062f0845
	Jun 10 16:42:25 multinode-826000 kubelet[1266]: E0610 16:42:25.114004    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 10 16:42:25 multinode-826000 kubelet[1266]: E0610 16:42:25.114335    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume podName:d3e6fbc7-ad9e-47a1-8592-9a22062f0845 nodeName:}" failed. No retries permitted until 2023-06-10 16:42:33.114317249 +0000 UTC m=+21.879095685 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d3e6fbc7-ad9e-47a1-8592-9a22062f0845-config-volume") pod "coredns-5d78c9869d-r9sjl" (UID: "d3e6fbc7-ad9e-47a1-8592-9a22062f0845") : object "kube-system"/"coredns" not registered
	Jun 10 16:42:26 multinode-826000 kubelet[1266]: E0610 16:42:26.424267    1266 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-r9sjl" podUID=d3e6fbc7-ad9e-47a1-8592-9a22062f0845
	Jun 10 16:42:33 multinode-826000 kubelet[1266]: I0610 16:42:33.616263    1266 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c86056c94d3df26c2732ba843da6cb214d22264baf724bc497ce210e23d6ef"
	Jun 10 16:42:49 multinode-826000 kubelet[1266]: I0610 16:42:49.729201    1266 scope.go:115] "RemoveContainer" containerID="e628a3dfc251b0a694b13456e12370d4a20feded1f77aa2dfb81b98ccec94221"
	Jun 10 16:42:49 multinode-826000 kubelet[1266]: I0610 16:42:49.729446    1266 scope.go:115] "RemoveContainer" containerID="6785f017705fb0ff8ff001be05a8d805bfd54959faa364f4ad662907f7735d9f"
	Jun 10 16:42:49 multinode-826000 kubelet[1266]: E0610 16:42:49.729564    1266 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(045816f3-b7b8-4909-8dc7-42d6d795adb1)\"" pod="kube-system/storage-provisioner" podUID=045816f3-b7b8-4909-8dc7-42d6d795adb1
	Jun 10 16:43:02 multinode-826000 kubelet[1266]: I0610 16:43:02.424426    1266 scope.go:115] "RemoveContainer" containerID="6785f017705fb0ff8ff001be05a8d805bfd54959faa364f4ad662907f7735d9f"
	Jun 10 16:43:11 multinode-826000 kubelet[1266]: E0610 16:43:11.443801    1266 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 10 16:43:11 multinode-826000 kubelet[1266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 16:43:11 multinode-826000 kubelet[1266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 16:43:11 multinode-826000 kubelet[1266]:  > table=nat chain=KUBE-KUBELET-CANARY
	
	* 
	* ==> storage-provisioner [55e30aa3e039] <==
	* I0610 16:43:02.511445       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 16:43:02.522373       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 16:43:02.522603       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 16:43:19.913555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 16:43:19.913962       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"610b228d-9310-4cdc-8468-8ce5be660bed", APIVersion:"v1", ResourceVersion:"543", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-826000_8b37f3d8-9246-4fc0-b749-98bf404fe79e became leader
	I0610 16:43:19.914555       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-826000_8b37f3d8-9246-4fc0-b749-98bf404fe79e!
	I0610 16:43:20.015179       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-826000_8b37f3d8-9246-4fc0-b749-98bf404fe79e!
	
	* 
	* ==> storage-provisioner [6785f017705f] <==
	* I0610 16:42:18.729602       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0610 16:42:48.736113       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-826000 -n multinode-826000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-826000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/ValidateNameConflict FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/ValidateNameConflict (82.00s)
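
Note: the "csidrivers.storage.k8s.io is forbidden" warnings at the top of the scheduler log are usually transient; the scheduler's informers begin listing before the bootstrap RBAC bindings have settled, and the messages stop once the cluster is up. If they persist, the grant can be checked directly with a SubjectAccessReview. A minimal sketch, not part of this suite, assuming a kubeconfig at the default location:

	// rbaccheck.go (illustrative): ask the API server whether
	// system:kube-scheduler may list csidrivers.storage.k8s.io,
	// the permission the scheduler log reports as forbidden.
	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig lives at the default ~/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{
					Group:    "storage.k8s.io",
					Resource: "csidrivers",
					Verb:     "list",
				},
			},
		}
		resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
			context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
	}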
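Note: the kubelet section shows the standard capped exponential backoff for failed volume mounts: durationBeforeRetry doubles from 500ms through 1s, 2s, 4s and 8s until the "kube-system"/"coredns" ConfigMap becomes visible. A minimal sketch of that retry pattern, illustrative rather than kubelet's actual code:

	// backoff.go (illustrative): the doubling retry delay visible in the
	// kubelet log above (500ms, 1s, 2s, 4s, 8s); not kubelet's implementation.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff retries op, doubling the wait after each failure up to
	// maxDelay, and gives up after maxAttempts.
	func retryWithBackoff(op func() error, initial, maxDelay time.Duration, maxAttempts int) error {
		delay := initial
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = op(); err == nil {
				return nil
			}
			if attempt == maxAttempts {
				break
			}
			time.Sleep(delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
		return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
	}

	func main() {
		calls := 0
		err := retryWithBackoff(func() error {
			calls++
			if calls < 4 {
				// Mimics the error the kubelet kept hitting above.
				return errors.New(`object "kube-system"/"coredns" not registered`)
			}
			return nil
		}, 500*time.Millisecond, 8*time.Second, 10)
		fmt.Printf("done after %d calls, err=%v\n", calls, err)
	}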

                                                
                                    

Test pass (285/316)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 23.79
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.27.2/json-events 20.22
11 TestDownloadOnly/v1.27.2/preload-exists 0
14 TestDownloadOnly/v1.27.2/kubectl 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.28
16 TestDownloadOnly/DeleteAll 0.38
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.35
19 TestBinaryMirror 0.96
20 TestOffline 55.81
22 TestAddons/Setup 201.53
24 TestAddons/parallel/Registry 15.12
25 TestAddons/parallel/Ingress 20.44
26 TestAddons/parallel/InspektorGadget 10.41
27 TestAddons/parallel/MetricsServer 5.44
28 TestAddons/parallel/HelmTiller 15.9
30 TestAddons/parallel/CSI 43.57
31 TestAddons/parallel/Headlamp 13.28
32 TestAddons/parallel/CloudSpanner 5.29
35 TestAddons/serial/GCPAuth/Namespaces 0.1
36 TestAddons/StoppedEnableDisable 5.67
37 TestCertOptions 43.83
38 TestCertExpiration 242.41
39 TestDockerFlags 46.48
40 TestForceSystemdFlag 41.84
41 TestForceSystemdEnv 39.05
43 TestHyperKitDriverInstallOrUpdate 7.03
46 TestErrorSpam/setup 33.92
47 TestErrorSpam/start 1.26
48 TestErrorSpam/status 0.42
49 TestErrorSpam/pause 1.16
50 TestErrorSpam/unpause 1.22
51 TestErrorSpam/stop 5.64
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 51.02
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 37.11
58 TestFunctional/serial/KubeContext 0.04
59 TestFunctional/serial/KubectlGetPods 0.05
62 TestFunctional/serial/CacheCmd/cache/add_remote 6.45
63 TestFunctional/serial/CacheCmd/cache/add_local 1.36
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
65 TestFunctional/serial/CacheCmd/cache/list 0.06
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.16
67 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
68 TestFunctional/serial/CacheCmd/cache/delete 0.13
69 TestFunctional/serial/MinikubeKubectlCmd 0.53
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.71
71 TestFunctional/serial/ExtraConfig 39.49
72 TestFunctional/serial/ComponentHealth 0.05
73 TestFunctional/serial/LogsCmd 2.92
74 TestFunctional/serial/LogsFileCmd 2.65
76 TestFunctional/parallel/ConfigCmd 0.38
77 TestFunctional/parallel/DashboardCmd 12.61
78 TestFunctional/parallel/DryRun 1.08
79 TestFunctional/parallel/InternationalLanguage 0.75
80 TestFunctional/parallel/StatusCmd 0.47
84 TestFunctional/parallel/ServiceCmdConnect 7.55
85 TestFunctional/parallel/AddonsCmd 0.22
86 TestFunctional/parallel/PersistentVolumeClaim 26.02
88 TestFunctional/parallel/SSHCmd 0.28
89 TestFunctional/parallel/CpCmd 0.58
90 TestFunctional/parallel/MySQL 27.22
91 TestFunctional/parallel/FileSync 0.21
92 TestFunctional/parallel/CertSync 1.08
96 TestFunctional/parallel/NodeLabels 0.09
98 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
100 TestFunctional/parallel/License 0.83
101 TestFunctional/parallel/Version/short 0.08
102 TestFunctional/parallel/Version/components 0.45
103 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
104 TestFunctional/parallel/ImageCommands/ImageListTable 0.15
105 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
106 TestFunctional/parallel/ImageCommands/ImageListYaml 0.17
107 TestFunctional/parallel/ImageCommands/ImageBuild 3.17
108 TestFunctional/parallel/ImageCommands/Setup 3.29
109 TestFunctional/parallel/DockerEnv/bash 0.72
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
113 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.18
114 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.15
115 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.91
116 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.19
117 TestFunctional/parallel/ImageCommands/ImageRemove 0.37
118 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.28
119 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.36
120 TestFunctional/parallel/ServiceCmd/DeployApp 14.14
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.39
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.13
126 TestFunctional/parallel/ServiceCmd/List 0.37
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
129 TestFunctional/parallel/ServiceCmd/Format 0.23
130 TestFunctional/parallel/ServiceCmd/URL 0.24
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
135 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
138 TestFunctional/parallel/ProfileCmd/profile_list 0.26
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
140 TestFunctional/parallel/MountCmd/any-port 6.96
141 TestFunctional/parallel/MountCmd/specific-port 1.45
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.34
143 TestFunctional/delete_addon-resizer_images 0.13
144 TestFunctional/delete_my-image_image 0.05
145 TestFunctional/delete_minikube_cached_images 0.05
149 TestImageBuild/serial/Setup 39.02
150 TestImageBuild/serial/NormalBuild 2.22
151 TestImageBuild/serial/BuildWithBuildArg 0.66
152 TestImageBuild/serial/BuildWithDockerIgnore 0.21
153 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.19
156 TestIngressAddonLegacy/StartLegacyK8sCluster 95.09
158 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 18.22
159 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.47
160 TestIngressAddonLegacy/serial/ValidateIngressAddons 30.94
163 TestJSONOutput/start/Command 52.18
164 TestJSONOutput/start/Audit 0
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/pause/Command 0.47
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.45
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 8.16
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.69
191 TestMainNoArgs 0.06
192 TestMinikubeProfile 84.06
195 TestMountStart/serial/StartWithMountFirst 19.31
196 TestMountStart/serial/VerifyMountFirst 0.29
197 TestMountStart/serial/StartWithMountSecond 18.96
198 TestMountStart/serial/VerifyMountSecond 0.29
199 TestMountStart/serial/DeleteFirst 2.31
200 TestMountStart/serial/VerifyMountPostDelete 0.28
201 TestMountStart/serial/Stop 2.2
202 TestMountStart/serial/RestartStopped 17.13
203 TestMountStart/serial/VerifyMountPostStop 0.28
214 TestMultiNode/serial/RestartKeepsNodes 61.75
222 TestPreload 160.75
224 TestScheduledStopUnix 106.36
225 TestSkaffold 112.16
228 TestRunningBinaryUpgrade 172.63
230 TestKubernetesUpgrade 170.62
243 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.11
244 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.02
245 TestStoppedBinaryUpgrade/Setup 2.52
246 TestStoppedBinaryUpgrade/Upgrade 163.64
248 TestPause/serial/Start 52.8
249 TestStoppedBinaryUpgrade/MinikubeLogs 2.9
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.44
259 TestNoKubernetes/serial/StartWithK8s 38.41
260 TestPause/serial/SecondStartNoReconfiguration 40.25
261 TestNoKubernetes/serial/StartWithStopK8s 17.21
262 TestNoKubernetes/serial/Start 19.08
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.11
264 TestNoKubernetes/serial/ProfileList 0.74
265 TestPause/serial/Pause 0.61
266 TestNoKubernetes/serial/Stop 2.28
267 TestPause/serial/VerifyStatus 0.14
268 TestPause/serial/Unpause 0.47
269 TestPause/serial/PauseAgain 0.52
270 TestPause/serial/DeletePaused 5.25
271 TestNoKubernetes/serial/StartNoArgs 15.92
272 TestPause/serial/VerifyDeletedResources 7.25
273 TestNetworkPlugins/group/auto/Start 88.11
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.11
275 TestNetworkPlugins/group/kindnet/Start 69.14
276 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
277 TestNetworkPlugins/group/kindnet/KubeletFlags 0.14
278 TestNetworkPlugins/group/kindnet/NetCatPod 13.19
279 TestNetworkPlugins/group/auto/KubeletFlags 0.14
280 TestNetworkPlugins/group/auto/NetCatPod 13.2
281 TestNetworkPlugins/group/kindnet/DNS 0.13
282 TestNetworkPlugins/group/kindnet/Localhost 0.13
283 TestNetworkPlugins/group/kindnet/HairPin 0.11
284 TestNetworkPlugins/group/auto/DNS 0.13
285 TestNetworkPlugins/group/auto/Localhost 0.1
286 TestNetworkPlugins/group/auto/HairPin 0.11
287 TestNetworkPlugins/group/calico/Start 74.14
288 TestNetworkPlugins/group/custom-flannel/Start 68.72
289 TestNetworkPlugins/group/calico/ControllerPod 5.01
290 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.14
291 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.2
292 TestNetworkPlugins/group/calico/KubeletFlags 0.15
293 TestNetworkPlugins/group/calico/NetCatPod 13.26
294 TestNetworkPlugins/group/custom-flannel/DNS 0.14
295 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
296 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
297 TestNetworkPlugins/group/calico/DNS 0.13
298 TestNetworkPlugins/group/calico/Localhost 0.14
299 TestNetworkPlugins/group/calico/HairPin 0.1
300 TestNetworkPlugins/group/false/Start 60.45
301 TestNetworkPlugins/group/enable-default-cni/Start 59.08
302 TestNetworkPlugins/group/false/KubeletFlags 0.15
303 TestNetworkPlugins/group/false/NetCatPod 13.22
304 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.14
305 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.21
306 TestNetworkPlugins/group/false/DNS 0.12
307 TestNetworkPlugins/group/false/Localhost 0.1
308 TestNetworkPlugins/group/false/HairPin 0.11
309 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
310 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
311 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
312 TestNetworkPlugins/group/flannel/Start 59.72
313 TestNetworkPlugins/group/bridge/Start 95.53
314 TestNetworkPlugins/group/flannel/ControllerPod 5.01
315 TestNetworkPlugins/group/flannel/KubeletFlags 0.14
316 TestNetworkPlugins/group/flannel/NetCatPod 12.28
317 TestNetworkPlugins/group/flannel/DNS 0.12
318 TestNetworkPlugins/group/flannel/Localhost 0.1
319 TestNetworkPlugins/group/flannel/HairPin 0.1
320 TestNetworkPlugins/group/kubenet/Start 57.55
321 TestNetworkPlugins/group/bridge/KubeletFlags 0.13
322 TestNetworkPlugins/group/bridge/NetCatPod 14.24
323 TestNetworkPlugins/group/bridge/DNS 0.15
324 TestNetworkPlugins/group/bridge/Localhost 0.12
325 TestNetworkPlugins/group/bridge/HairPin 0.14
327 TestStartStop/group/old-k8s-version/serial/FirstStart 162.03
328 TestNetworkPlugins/group/kubenet/KubeletFlags 0.15
329 TestNetworkPlugins/group/kubenet/NetCatPod 12.31
330 TestNetworkPlugins/group/kubenet/DNS 0.13
331 TestNetworkPlugins/group/kubenet/Localhost 0.1
332 TestNetworkPlugins/group/kubenet/HairPin 0.11
334 TestStartStop/group/no-preload/serial/FirstStart 69.16
335 TestStartStop/group/no-preload/serial/DeployApp 9.28
336 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.78
337 TestStartStop/group/no-preload/serial/Stop 8.27
338 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
339 TestStartStop/group/no-preload/serial/SecondStart 298.69
340 TestStartStop/group/old-k8s-version/serial/DeployApp 10.32
341 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.7
342 TestStartStop/group/old-k8s-version/serial/Stop 8.24
343 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
344 TestStartStop/group/old-k8s-version/serial/SecondStart 501.98
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
346 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
347 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.17
348 TestStartStop/group/no-preload/serial/Pause 1.8
350 TestStartStop/group/embed-certs/serial/FirstStart 89.77
351 TestStartStop/group/embed-certs/serial/DeployApp 9.27
352 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.77
353 TestStartStop/group/embed-certs/serial/Stop 8.24
354 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
355 TestStartStop/group/embed-certs/serial/SecondStart 299.63
356 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
357 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
358 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.16
359 TestStartStop/group/old-k8s-version/serial/Pause 1.71
361 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.51
362 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
364 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.26
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
366 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 299.54
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
369 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
370 TestStartStop/group/embed-certs/serial/Pause 1.82
372 TestStartStop/group/newest-cni/serial/FirstStart 49.26
373 TestStartStop/group/newest-cni/serial/DeployApp 0
374 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
375 TestStartStop/group/newest-cni/serial/Stop 8.28
376 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
377 TestStartStop/group/newest-cni/serial/SecondStart 38.87
378 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.16
381 TestStartStop/group/newest-cni/serial/Pause 1.74
382 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
383 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
384 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.16
385 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.75
TestDownloadOnly/v1.16.0/json-events (23.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-973000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-973000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (23.789459714s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (23.79s)
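
Note: with -o=json, minikube start emits one JSON event per stdout line instead of the human-readable steps, which is what this subtest exercises. A sketch of consuming that stream; the flags mirror the invocation above, and the event handling is kept generic since the exact schema belongs to minikube:

	// events.go (illustrative): consume the one-JSON-event-per-line stream
	// that -o=json produces.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Flags mirror the invocation above; the binary path assumes a built tree.
		cmd := exec.Command("out/minikube-darwin-amd64", "start", "-o=json",
			"--download-only", "-p", "download-only-973000",
			"--kubernetes-version=v1.16.0", "--driver=hyperkit")
		out, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		sc := bufio.NewScanner(out)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some events are long
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON noise on the stream
			}
			fmt.Printf("event type=%v\n", ev["type"])
		}
		_ = cmd.Wait()
	}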

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-973000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-973000: exit status 85 (296.550713ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-973000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |          |
	|         | -p download-only-973000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:21:13
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.4 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:21:13.045781    1684 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:21:13.045998    1684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:13.046004    1684 out.go:309] Setting ErrFile to fd 2...
	I0610 09:21:13.046022    1684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:13.046131    1684 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
	W0610 09:21:13.046236    1684 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16578-1235/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16578-1235/.minikube/config/config.json: no such file or directory
	I0610 09:21:13.047734    1684 out.go:303] Setting JSON to true
	I0610 09:21:13.069429    1684 start.go:127] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1243,"bootTime":1686412830,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0610 09:21:13.069509    1684 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:21:13.090936    1684 out.go:97] [download-only-973000] minikube v1.30.1 on Darwin 13.4
	I0610 09:21:13.112712    1684 out.go:169] MINIKUBE_LOCATION=16578
	I0610 09:21:13.091188    1684 notify.go:220] Checking for updates...
	W0610 09:21:13.091235    1684 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball: no such file or directory
	I0610 09:21:13.154758    1684 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:21:13.175886    1684 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 09:21:13.196830    1684 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:21:13.217941    1684 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube
	W0610 09:21:13.259565    1684 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 09:21:13.259985    1684 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:21:13.346695    1684 out.go:97] Using the hyperkit driver based on user configuration
	I0610 09:21:13.346770    1684 start.go:297] selected driver: hyperkit
	I0610 09:21:13.346782    1684 start.go:875] validating driver "hyperkit" against <nil>
	I0610 09:21:13.346898    1684 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:21:13.347284    1684 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/16578-1235/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 09:21:13.489661    1684 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.30.1
	I0610 09:21:13.493639    1684 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:21:13.493664    1684 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 09:21:13.493742    1684 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 09:21:13.498091    1684 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0610 09:21:13.498241    1684 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 09:21:13.498266    1684 cni.go:84] Creating CNI manager for ""
	I0610 09:21:13.498279    1684 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 09:21:13.498285    1684 start_flags.go:319] config:
	{Name:download-only-973000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-973000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:21:13.498550    1684 iso.go:125] acquiring lock: {Name:mkc028968ad126cece35ec994c5f11699b30bc34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:21:13.520392    1684 out.go:97] Downloading VM boot image ...
	I0610 09:21:13.520555    1684 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/iso/amd64/minikube-v1.30.1-1686096373-16019-amd64.iso
	I0610 09:21:22.074415    1684 out.go:97] Starting control plane node download-only-973000 in cluster download-only-973000
	I0610 09:21:22.074512    1684 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 09:21:22.215382    1684 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0610 09:21:22.215426    1684 cache.go:57] Caching tarball of preloaded images
	I0610 09:21:22.215786    1684 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 09:21:22.237269    1684 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0610 09:21:22.237353    1684 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0610 09:21:22.441066    1684 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0610 09:21:32.147435    1684 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0610 09:21:32.147712    1684 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0610 09:21:32.688546    1684 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0610 09:21:32.688799    1684 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/download-only-973000/config.json ...
	I0610 09:21:32.688825    1684 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/download-only-973000/config.json: {Name:mkaa5bc6937ed8d1ddb89855b971314a532b9faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:21:32.689092    1684 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 09:21:32.689355    1684 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-973000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)
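
Note: the download.go lines above attach a checksum query to every URL (md5:<hex> for the preload tarball, file:<url> for the ISO and kubectl), and the payload is verified against it before being cached. A hand-rolled sketch of the md5 case, illustrative only; minikube performs this inside its own download package:

	// verify.go (illustrative): the md5 verification step behind the
	// "?checksum=md5:..." URLs in the log above.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// fetchAndVerify downloads url to dst and fails if the payload's md5
	// does not match wantHex.
	func fetchAndVerify(url, dst, wantHex string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		// Hash while writing so the payload is streamed only once.
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// URL and checksum copied from the preload download in the log above.
		err := fetchAndVerify(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
			"preloaded-images.tar.lz4",
			"326f3ce331abb64565b50b8c9e791244",
		)
		fmt.Println("verify:", err)
	}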

                                                
                                    
TestDownloadOnly/v1.27.2/json-events (20.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-973000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-973000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=hyperkit : (20.220437051s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (20.22s)

                                                
                                    
TestDownloadOnly/v1.27.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/kubectl
--- PASS: TestDownloadOnly/v1.27.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.2/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-973000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-973000: exit status 85 (280.010796ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-973000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |          |
	|         | -p download-only-973000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-973000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |          |
	|         | -p download-only-973000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:21:37
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.4 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:21:37.134328    1700 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:21:37.134541    1700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:37.134547    1700 out.go:309] Setting ErrFile to fd 2...
	I0610 09:21:37.134570    1700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:37.134689    1700 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
	W0610 09:21:37.134780    1700 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16578-1235/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16578-1235/.minikube/config/config.json: no such file or directory
	I0610 09:21:37.136351    1700 out.go:303] Setting JSON to true
	I0610 09:21:37.155396    1700 start.go:127] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1267,"bootTime":1686412830,"procs":394,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0610 09:21:37.155489    1700 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:21:37.178374    1700 out.go:97] [download-only-973000] minikube v1.30.1 on Darwin 13.4
	I0610 09:21:37.178619    1700 notify.go:220] Checking for updates...
	I0610 09:21:37.199608    1700 out.go:169] MINIKUBE_LOCATION=16578
	I0610 09:21:37.220474    1700 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:21:37.241498    1700 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 09:21:37.262628    1700 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:21:37.283476    1700 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube
	W0610 09:21:37.326681    1700 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 09:21:37.327367    1700 config.go:182] Loaded profile config "download-only-973000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0610 09:21:37.327448    1700 start.go:783] api.Load failed for download-only-973000: filestore "download-only-973000": Docker machine "download-only-973000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0610 09:21:37.327590    1700 driver.go:375] Setting default libvirt URI to qemu:///system
	W0610 09:21:37.327629    1700 start.go:783] api.Load failed for download-only-973000: filestore "download-only-973000": Docker machine "download-only-973000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0610 09:21:37.355594    1700 out.go:97] Using the hyperkit driver based on existing profile
	I0610 09:21:37.355677    1700 start.go:297] selected driver: hyperkit
	I0610 09:21:37.355689    1700 start.go:875] validating driver "hyperkit" against &{Name:download-only-973000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-973000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:21:37.355948    1700 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:21:37.356153    1700 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/16578-1235/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 09:21:37.364209    1700 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.30.1
	I0610 09:21:37.367657    1700 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:21:37.367681    1700 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 09:21:37.370020    1700 cni.go:84] Creating CNI manager for ""
	I0610 09:21:37.370040    1700 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:21:37.370051    1700 start_flags.go:319] config:
	{Name:download-only-973000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-973000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:21:37.370191    1700 iso.go:125] acquiring lock: {Name:mkc028968ad126cece35ec994c5f11699b30bc34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:21:37.391423    1700 out.go:97] Starting control plane node download-only-973000 in cluster download-only-973000
	I0610 09:21:37.391463    1700 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:21:37.494156    1700 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0610 09:21:37.494213    1700 cache.go:57] Caching tarball of preloaded images
	I0610 09:21:37.494566    1700 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:21:37.515978    1700 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0610 09:21:37.516025    1700 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 ...
	I0610 09:21:37.719873    1700 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4?checksum=md5:1858f4460df043b5f83c3d1ea676dbc0 -> /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0610 09:21:53.103025    1700 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 ...
	I0610 09:21:53.103203    1700 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 ...
	I0610 09:21:53.697536    1700 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 09:21:53.697635    1700 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/download-only-973000/config.json ...
	I0610 09:21:53.698022    1700 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:21:53.698277    1700 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/16578-1235/.minikube/cache/darwin/amd64/v1.27.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-973000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.28s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.38s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-973000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

                                                
                                    
TestBinaryMirror (0.96s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-937000 --alsologtostderr --binary-mirror http://127.0.0.1:49339 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-937000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-937000
--- PASS: TestBinaryMirror (0.96s)
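
Note: --binary-mirror redirects the kubectl/kubelet/kubeadm downloads from dl.k8s.io to an alternate base URL; the harness serves one on 127.0.0.1:49339 for this test. A minimal sketch of such a mirror; the ./mirror directory layout is an assumption and must match the paths minikube requests (compare the dl.k8s.io URL in the logs above):

	// mirror.go (illustrative): a local binary mirror like the one this test
	// points --binary-mirror at. ./mirror must already contain the release
	// files under the paths minikube requests (an assumption here).
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve the pre-populated directory tree over plain HTTP.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Println("serving binary mirror on http://127.0.0.1:49339")
		log.Fatal(http.ListenAndServe("127.0.0.1:49339", nil))
	}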

                                                
                                    
TestOffline (55.81s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-046000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-046000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (50.497505027s)
helpers_test.go:175: Cleaning up "offline-docker-046000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-046000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-046000: (5.314337831s)
--- PASS: TestOffline (55.81s)

                                                
                                    
TestAddons/Setup (201.53s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-200000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-200000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m21.527897194s)
--- PASS: TestAddons/Setup (201.53s)

                                                
                                    
TestAddons/parallel/Registry (15.12s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 8.497849ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-8hfh6" [c8dc4122-54d9-489f-87b3-f33b205a9a4b] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008592075s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wxn7w" [b5610002-4404-49b5-b3c5-bcecd787981e] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009906442s
addons_test.go:316: (dbg) Run:  kubectl --context addons-200000 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-200000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-200000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.584971979s)
addons_test.go:335: (dbg) Run:  out/minikube-darwin-amd64 -p addons-200000 ip
2023/06/10 09:25:35 [DEBUG] GET http://192.168.64.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p addons-200000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.12s)
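The two "healthy within" checks above are produced by a poll loop over a pod label selector. A minimal stand-alone sketch of that pattern with client-go, illustrative only and not the harness's actual helper (the 6-minute budget mirrors the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the usual kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// List kube-system pods matching the selector the test waits on.
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "actual-registry=true"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("%s is Running\n", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for a running registry pod")
}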

TestAddons/parallel/Ingress (20.44s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-200000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-200000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-200000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [df24109b-ace7-4c41-9f16-43022080abf2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [df24109b-ace7-4c41-9f16-43022080abf2] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.007477942s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p addons-200000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-200000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-amd64 -p addons-200000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.64.2
addons_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p addons-200000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 -p addons-200000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-amd64 -p addons-200000 addons disable ingress --alsologtostderr -v=1: (7.491412275s)
--- PASS: TestAddons/parallel/Ingress (20.44s)
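The nslookup step above queries the minikube VM's ingress-dns server directly instead of the host resolver. A hedged Go equivalent using a custom net.Resolver (illustrative; the real test shells out to nslookup, and the server address is the cluster IP from the log):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Send every lookup to the cluster's DNS rather than the default
		// server, the way `nslookup hello-john.test 192.168.64.2` does.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.64.2:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // the ingress-dns addon should answer with the cluster IP
}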

TestAddons/parallel/InspektorGadget (10.41s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-g8s52" [a7a061aa-9e99-4cfd-a6b9-96e87c5436a4] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00763403s
addons_test.go:817: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-200000
addons_test.go:817: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-200000: (5.399867165s)
--- PASS: TestAddons/parallel/InspektorGadget (10.41s)

TestAddons/parallel/MetricsServer (5.44s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 3.155148ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-9nz67" [01907bfb-5d57-420e-80df-58d7fc93d2da] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009731895s
addons_test.go:391: (dbg) Run:  kubectl --context addons-200000 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p addons-200000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.44s)

TestAddons/parallel/HelmTiller (15.9s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 3.865067ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-l4bbx" [b5c4020d-64e4-4a2f-bf92-54d36ced664d] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008665872s
addons_test.go:449: (dbg) Run:  kubectl --context addons-200000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-200000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.480009329s)
addons_test.go:454: kubectl --context addons-200000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:449: (dbg) Run:  kubectl --context addons-200000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-200000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.468929584s)
addons_test.go:466: (dbg) Run:  out/minikube-darwin-amd64 -p addons-200000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.90s)

TestAddons/parallel/CSI (43.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 3.699113ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-200000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-200000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5b3d7174-2ce6-4e8e-880e-68208c85e8df] Pending
helpers_test.go:344: "task-pv-pod" [5b3d7174-2ce6-4e8e-880e-68208c85e8df] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5b3d7174-2ce6-4e8e-880e-68208c85e8df] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.007798357s
addons_test.go:560: (dbg) Run:  kubectl --context addons-200000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-200000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-200000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-200000 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-200000 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-200000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-200000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-200000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4ebeb2b2-beb4-46d4-ba71-c7ccabf13528] Pending
helpers_test.go:344: "task-pv-pod-restore" [4ebeb2b2-beb4-46d4-ba71-c7ccabf13528] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4ebeb2b2-beb4-46d4-ba71-c7ccabf13528] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.009962824s
addons_test.go:602: (dbg) Run:  kubectl --context addons-200000 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-200000 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-200000 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-darwin-amd64 -p addons-200000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-darwin-amd64 -p addons-200000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.334816025s)
addons_test.go:618: (dbg) Run:  out/minikube-darwin-amd64 -p addons-200000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.57s)
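The long run of identical helpers_test.go:394 lines above is a poll: the helper re-runs kubectl until the PVC's .status.phase reads "Bound". A minimal sketch of that loop (function name and timeout are assumptions, not the harness's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound polls `kubectl get pvc` until the claim reports phase Bound.
func waitPVCBound(kubectx, name, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-200000", "hpvc", "default", 6*time.Minute); err != nil {
		panic(err)
	}
}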

TestAddons/parallel/Headlamp (13.28s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-200000 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-200000 --alsologtostderr -v=1: (1.267534485s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-6b5756787-stmpt" [432f1a96-6cc7-45fb-96bf-54b4952bafc1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-stmpt" [432f1a96-6cc7-45fb-96bf-54b4952bafc1] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.011380068s
--- PASS: TestAddons/parallel/Headlamp (13.28s)

TestAddons/parallel/CloudSpanner (5.29s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-fb67554b8-kfpnp" [a814b65c-7a7d-4a92-9b57-64d0c3ccf117] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008521266s
addons_test.go:836: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-200000
--- PASS: TestAddons/parallel/CloudSpanner (5.29s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-200000 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-200000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (5.67s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-200000
addons_test.go:148: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-200000: (5.221770866s)
addons_test.go:152: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-200000
addons_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-200000
addons_test.go:161: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-200000
--- PASS: TestAddons/StoppedEnableDisable (5.67s)

TestCertOptions (43.83s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-629000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-629000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (38.229532183s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-629000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-629000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-629000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-629000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-629000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-629000: (5.27529328s)
--- PASS: TestCertOptions (43.83s)
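The openssl invocation above checks that the certificate picked up the extra --apiserver-ips, --apiserver-names, and --apiserver-port values as SANs. The same inspection can be sketched in Go with crypto/x509 (illustrative; the file path assumes the cert was copied out of the VM first):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy, e.g. fetched via `minikube ssh` or scp.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, www.google.com, ...
	fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
}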

TestCertExpiration (242.41s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-507000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-507000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (34.085246566s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-507000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0610 09:55:43.775960    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-507000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (23.022850539s)
helpers_test.go:175: Cleaning up "cert-expiration-507000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-507000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-507000: (5.297920056s)
--- PASS: TestCertExpiration (242.41s)

TestDockerFlags (46.48s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-107000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0610 09:51:45.372751    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-107000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (42.763115798s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-107000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-107000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-107000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-107000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-107000: (3.420771282s)
--- PASS: TestDockerFlags (46.48s)
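The two systemctl show probes above confirm that the --docker-env and --docker-opt values reached the Docker unit. systemctl prints a single line such as Environment=FOO=BAR BAZ=BAT, so the assertion reduces to substring checks; a small sketch (the input string is a stand-in for the ssh output):

package main

import (
	"fmt"
	"strings"
)

func main() {
	out := "Environment=FOO=BAR BAZ=BAT" // stand-in for the `systemctl show` output
	kv := strings.TrimPrefix(strings.TrimSpace(out), "Environment=")
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		// Pad with spaces so FOO=BAR does not match FOO=BARX.
		if !strings.Contains(" "+kv+" ", " "+want+" ") {
			fmt.Println("missing docker env:", want)
			return
		}
	}
	fmt.Println("all --docker-env values present")
}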

TestForceSystemdFlag (41.84s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-021000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-021000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (36.262829061s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-021000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-021000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-021000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-021000: (5.391311656s)
--- PASS: TestForceSystemdFlag (41.84s)

TestForceSystemdEnv (39.05s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-067000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-067000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (35.41105129s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-067000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-067000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-067000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-067000: (3.417551634s)
--- PASS: TestForceSystemdEnv (39.05s)

TestHyperKitDriverInstallOrUpdate (7.03s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.03s)

TestErrorSpam/setup (33.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-869000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-869000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 --driver=hyperkit : (33.919401529s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2."
--- PASS: TestErrorSpam/setup (33.92s)

TestErrorSpam/start (1.26s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 start --dry-run
--- PASS: TestErrorSpam/start (1.26s)

TestErrorSpam/status (0.42s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 status
--- PASS: TestErrorSpam/status (0.42s)

TestErrorSpam/pause (1.16s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 pause
--- PASS: TestErrorSpam/pause (1.16s)

TestErrorSpam/unpause (1.22s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 unpause
--- PASS: TestErrorSpam/unpause (1.22s)

TestErrorSpam/stop (5.64s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 stop: (5.217916613s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-869000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-869000 stop
--- PASS: TestErrorSpam/stop (5.64s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /Users/jenkins/minikube-integration/16578-1235/.minikube/files/etc/test/nested/copy/1682/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.02s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-222000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2229: (dbg) Done: out/minikube-darwin-amd64 start -p functional-222000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (51.022614239s)
--- PASS: TestFunctional/serial/StartWithProxy (51.02s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.11s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-222000 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-darwin-amd64 start -p functional-222000 --alsologtostderr -v=8: (37.111952836s)
functional_test.go:658: soft start took 37.112574509s for "functional-222000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.11s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-222000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-222000 cache add registry.k8s.io/pause:3.1: (2.316695659s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-222000 cache add registry.k8s.io/pause:3.3: (2.261155985s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 cache add registry.k8s.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-222000 cache add registry.k8s.io/pause:latest: (1.871284724s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.45s)

TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-222000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1851665315/001
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 cache add minikube-local-cache-test:functional-222000
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 cache delete minikube-local-cache-test:functional-222000
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-222000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.16s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-222000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (132.555831ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-darwin-amd64 -p functional-222000 cache reload: (1.194194612s)
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.53s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 kubectl -- --context functional-222000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.71s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-222000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.71s)

TestFunctional/serial/ExtraConfig (39.49s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-222000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-darwin-amd64 start -p functional-222000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.4910907s)
functional_test.go:756: restart took 39.491256967s for "functional-222000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.49s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-222000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
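The phase/status pairs above come from reading the control-plane pods as JSON and checking .status.phase plus the Ready condition. A trimmed sketch of that decoding (the embedded JSON is a one-pod stand-in for the real kubectl -o=json output, not the harness's code):

package main

import (
	"encoding/json"
	"fmt"
)

// podList captures just the fields the health check needs from
// `kubectl get po -l tier=control-plane -n kube-system -o=json`.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	raw := []byte(`{"items":[{"metadata":{"labels":{"component":"etcd"}},
		"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}]}}]}`)
	var pods podList
	if err := json.Unmarshal(raw, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, Ready: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}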

TestFunctional/serial/LogsCmd (2.92s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 logs
functional_test.go:1231: (dbg) Done: out/minikube-darwin-amd64 -p functional-222000 logs: (2.924577775s)
--- PASS: TestFunctional/serial/LogsCmd (2.92s)

TestFunctional/serial/LogsFileCmd (2.65s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1410148076/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-darwin-amd64 -p functional-222000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1410148076/001/logs.txt: (2.651642648s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.65s)

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-222000 config get cpus: exit status 14 (40.319449ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-222000 config get cpus: exit status 14 (39.951299ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
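The expected "exit status 14" results above hinge on reading a child process's exit code, which in Go comes out of exec.ExitError. A minimal sketch (paths mirror the log; treating 14 as the unset-key code is inferred from this run, not from minikube documentation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-222000",
		"config", "get", "cpus")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// An unset key makes `config get` exit non-zero (14 in the run above).
		fmt.Println("exit code:", ee.ExitCode())
	} else if err != nil {
		panic(err) // the binary failed to start at all
	} else {
		fmt.Println("key was set; exit code 0")
	}
}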

TestFunctional/parallel/DashboardCmd (12.61s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-222000 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-222000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2887: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.61s)
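The helpers_test.go:508 note above ("os: process already finished") is a benign race: the dashboard child exited before cleanup tried to kill it. A sketch of a cleanup that tolerates that outcome (the sleep command stands in for the dashboard process):

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sleep", "1") // stand-in for the dashboard daemon
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = cmd.Wait() // the child exits (and is reaped) before cleanup runs
	// Killing an already-finished process yields os.ErrProcessDone
	// ("os: process already finished"); treat that as success.
	if err := cmd.Process.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
		fmt.Printf("unable to kill pid %d: %v\n", cmd.Process.Pid, err)
	}
}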

TestFunctional/parallel/DryRun (1.08s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-222000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:969: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-222000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (500.518298ms)

-- stdout --
	* [functional-222000] minikube v1.30.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0610 09:31:20.234779    2834 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:31:20.234958    2834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:31:20.234965    2834 out.go:309] Setting ErrFile to fd 2...
	I0610 09:31:20.234969    2834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:31:20.235086    2834 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
	I0610 09:31:20.236515    2834 out.go:303] Setting JSON to false
	I0610 09:31:20.256084    2834 start.go:127] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1850,"bootTime":1686412830,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0610 09:31:20.256176    2834 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:31:20.277888    2834 out.go:177] * [functional-222000] minikube v1.30.1 on Darwin 13.4
	I0610 09:31:20.322066    2834 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:31:20.322053    2834 notify.go:220] Checking for updates...
	I0610 09:31:20.365662    2834 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:31:20.386884    2834 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 09:31:20.407946    2834 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:31:20.454017    2834 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube
	I0610 09:31:20.475976    2834 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:31:20.497593    2834 config.go:182] Loaded profile config "functional-222000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:31:20.498314    2834 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:31:20.498356    2834 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:31:20.506012    2834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50409
	I0610 09:31:20.506351    2834 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:31:20.506802    2834 main.go:141] libmachine: Using API Version  1
	I0610 09:31:20.506813    2834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:31:20.507047    2834 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:31:20.507171    2834 main.go:141] libmachine: (functional-222000) Calling .DriverName
	I0610 09:31:20.507333    2834 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:31:20.507580    2834 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:31:20.507622    2834 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:31:20.514196    2834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50411
	I0610 09:31:20.514517    2834 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:31:20.514847    2834 main.go:141] libmachine: Using API Version  1
	I0610 09:31:20.514862    2834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:31:20.515072    2834 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:31:20.515160    2834 main.go:141] libmachine: (functional-222000) Calling .DriverName
	I0610 09:31:20.542810    2834 out.go:177] * Using the hyperkit driver based on existing profile
	I0610 09:31:20.584577    2834 start.go:297] selected driver: hyperkit
	I0610 09:31:20.584594    2834 start.go:875] validating driver "hyperkit" against &{Name:functional-222000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-222000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:&lt;nil&gt; ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:31:20.584698    2834 start.go:886] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:31:20.608872    2834 out.go:177] 
	W0610 09:31:20.629880    2834 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0610 09:31:20.651242    2834 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-222000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.08s)

TestFunctional/parallel/InternationalLanguage (0.75s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-222000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-222000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (753.980408ms)

-- stdout --
	* [functional-222000] minikube v1.30.1 sur Darwin 13.4
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0610 09:31:20.926225    2847 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:31:20.926609    2847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:31:20.926624    2847 out.go:309] Setting ErrFile to fd 2...
	I0610 09:31:20.926633    2847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:31:20.926915    2847 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
	I0610 09:31:20.948094    2847 out.go:303] Setting JSON to false
	I0610 09:31:20.969342    2847 start.go:127] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1850,"bootTime":1686412830,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0610 09:31:20.969424    2847 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:31:21.005672    2847 out.go:177] * [functional-222000] minikube v1.30.1 sur Darwin 13.4
	I0610 09:31:21.063752    2847 notify.go:220] Checking for updates...
	I0610 09:31:21.105585    2847 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:31:21.147634    2847 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
	I0610 09:31:21.189642    2847 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 09:31:21.252511    2847 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:31:21.315498    2847 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube
	I0610 09:31:21.357470    2847 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:31:21.378927    2847 config.go:182] Loaded profile config "functional-222000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:31:21.379292    2847 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:31:21.379335    2847 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:31:21.386331    2847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50426
	I0610 09:31:21.386703    2847 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:31:21.387124    2847 main.go:141] libmachine: Using API Version  1
	I0610 09:31:21.387152    2847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:31:21.387358    2847 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:31:21.387464    2847 main.go:141] libmachine: (functional-222000) Calling .DriverName
	I0610 09:31:21.387652    2847 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:31:21.387895    2847 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 09:31:21.387917    2847 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 09:31:21.394604    2847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50428
	I0610 09:31:21.394935    2847 main.go:141] libmachine: () Calling .GetVersion
	I0610 09:31:21.395329    2847 main.go:141] libmachine: Using API Version  1
	I0610 09:31:21.395345    2847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 09:31:21.395552    2847 main.go:141] libmachine: () Calling .GetMachineName
	I0610 09:31:21.395650    2847 main.go:141] libmachine: (functional-222000) Calling .DriverName
	I0610 09:31:21.422391    2847 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0610 09:31:21.464575    2847 start.go:297] selected driver: hyperkit
	I0610 09:31:21.464600    2847 start.go:875] validating driver "hyperkit" against &{Name:functional-222000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-222000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 09:31:21.464802    2847 start.go:886] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:31:21.527634    2847 out.go:177] 
	W0610 09:31:21.570569    2847 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0610 09:31:21.591415    2847 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.75s)
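
This test repeats the undersized dry-run under a French locale and passes because the same RSRC_INSUFFICIENT_REQ_MEMORY error comes back localized ("Fermeture en raison de ... 250 Mio est inférieure au minimum utilisable de 1800 Mo" is the French rendering of "Exiting due to ... 250MiB is less than the usable minimum of 1800MB"). A sketch of the idea; the exact environment variable the harness sets to switch minikube's language is an assumption here, LC_ALL=fr is used purely for illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-222000",
		"--dry-run", "--memory", "250MB", "--driver=hyperkit")
	// Assumption: a French locale env var is what flips minikube's output
	// language; the real test sets something equivalent.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput() // exit status 23 is expected, as recorded above
	if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") &&
		strings.Contains(string(out), "Fermeture") {
		fmt.Println("localized error surfaced as expected")
	}
}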

TestFunctional/parallel/StatusCmd (0.47s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 status
functional_test.go:855: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.47s)
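
The -f invocation above extracts individual fields (Host, Kubelet, APIServer, Kubeconfig) from the status object via a Go template, and -o json emits the same object as JSON. A sketch that decodes that JSON with a struct limited to the fields the template references; the real payload may carry more keys:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus covers the fields referenced by the -f template above;
// this is not an exhaustive schema.
type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-222000",
		"status", "-o", "json").Output()
	if err != nil {
		// minikube status encodes degraded states in its exit code,
		// so a nonzero exit here is not necessarily fatal.
		fmt.Println("status exited nonzero:", err)
	}
	var st clusterStatus
	if jsonErr := json.Unmarshal(out, &st); jsonErr == nil {
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}
}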

TestFunctional/parallel/ServiceCmdConnect (7.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-222000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-222000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-wkngl" [e5034c89-c363-422f-87a3-e4385c7d52c9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0610 09:31:02.184009    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
helpers_test.go:344: "hello-node-connect-6fb669fc84-wkngl" [e5034c89-c363-422f-87a3-e4385c7d52c9] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.015439146s
functional_test.go:1647: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.64.4:31537
functional_test.go:1673: http://192.168.64.4:31537: success! body:

Hostname: hello-node-connect-6fb669fc84-wkngl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.64.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.64.4:31537
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.55s)
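
The connectivity check boils down to resolving the NodePort URL with `minikube service --url` and issuing a plain GET; the echoserver body above is that response. A sketch of the probe, with the URL hard-coded from this run (in practice it would be parsed from the `service --url` output):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	// The endpoint below is the one printed by
	// `minikube service hello-node-connect --url` in the run above.
	resp, err := client.Get("http://192.168.64.4:31537")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
}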

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (26.02s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [78854f21-fd8c-4b82-8bd1-0f6e7d5d6c3a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00628032s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-222000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-222000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-222000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-222000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0728b42f-7bc7-4cf6-83f5-a2f5a9da1439] Pending
helpers_test.go:344: "sp-pod" [0728b42f-7bc7-4cf6-83f5-a2f5a9da1439] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0728b42f-7bc7-4cf6-83f5-a2f5a9da1439] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.008387235s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-222000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-222000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-222000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a6a1c4ff-64cd-4de0-92a3-cd441b5bf834] Pending
helpers_test.go:344: "sp-pod" [a6a1c4ff-64cd-4de0-92a3-cd441b5bf834] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a6a1c4ff-64cd-4de0-92a3-cd441b5bf834] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008519849s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-222000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.02s)
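
The PVC sequence above is a persistence round-trip: write a marker file onto the mounted claim, delete and recreate the consuming pod, then confirm the file survived the pod churn. A sketch of the same round-trip driven through kubectl, with manifest paths copied from the log; the wait for the replacement pod to reach Running is elided here but essential in practice:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the functional-222000 context,
// mirroring how the test drives the cluster.
func run(args ...string) error {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-222000"}, args...)...).CombinedOutput()
	fmt.Printf("kubectl %v: %s", args, out)
	return err
}

func main() {
	// Marker write, pod churn, then the persistence check.
	_ = run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	_ = run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = run("exec", "sp-pod", "--", "ls", "/tmp/mount") // expect "foo" to still be there
}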

TestFunctional/parallel/SSHCmd (0.28s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.28s)

TestFunctional/parallel/CpCmd (0.58s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh -n functional-222000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 cp functional-222000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd1525586892/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh -n functional-222000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.58s)
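
The cp test is likewise a round-trip: push testdata/cp-test.txt into the VM, cat it there, pull it back out to a temp directory, and cat again; passing means both reads match the source. A sketch of the equivalence check using the same paths (error handling trimmed for brevity):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	want, _ := os.ReadFile("testdata/cp-test.txt")
	tmp := filepath.Join(os.TempDir(), "cp-test.txt")
	// Into the VM and back out again, as in the log above.
	_ = exec.Command("out/minikube-darwin-amd64", "-p", "functional-222000",
		"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").Run()
	_ = exec.Command("out/minikube-darwin-amd64", "-p", "functional-222000",
		"cp", "functional-222000:/home/docker/cp-test.txt", tmp).Run()
	got, _ := os.ReadFile(tmp)
	fmt.Println("round-trip intact:", bytes.Equal(want, got))
}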

TestFunctional/parallel/MySQL (27.22s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-222000 replace --force -f testdata/mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
E0610 09:30:22.499989    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
helpers_test.go:344: "mysql-7db894d786-jv5rn" [18b5aa69-f388-43a7-b76f-9d8c15f3f28e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0610 09:30:23.781258    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
helpers_test.go:344: "mysql-7db894d786-jv5rn" [18b5aa69-f388-43a7-b76f-9d8c15f3f28e] Running
E0610 09:30:41.703746    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.028790315s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-222000 exec mysql-7db894d786-jv5rn -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-222000 exec mysql-7db894d786-jv5rn -- mysql -ppassword -e "show databases;": exit status 1 (135.286274ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-222000 exec mysql-7db894d786-jv5rn -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-222000 exec mysql-7db894d786-jv5rn -- mysql -ppassword -e "show databases;": exit status 1 (102.379279ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-222000 exec mysql-7db894d786-jv5rn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.22s)
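
The two ERROR 2002 attempts above are expected noise: the pod reports Running before mysqld has created its socket, so the test simply retries the query until it connects. A sketch of that retry shape; the attempt count and delay below are arbitrary choices, not the test's actual values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	query := []string{"--context", "functional-222000", "exec",
		"mysql-7db894d786-jv5rn", "--", "mysql", "-ppassword", "-e", "show databases;"}
	// mysqld may still be initializing after the pod turns Running, so
	// ERROR 2002 on early attempts is retried, as in the log above.
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", query...).CombinedOutput()
		if err == nil {
			fmt.Printf("attempt %d succeeded:\n%s", attempt, out)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mysql never became reachable")
}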

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/1682/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "sudo cat /etc/test/nested/copy/1682/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
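
FileSync relies on minikube copying any tree staged under $MINIKUBE_HOME/files into the VM at the same absolute path when the cluster starts; /etc/test/nested/copy/1682/hosts is such a file, and 1682 is evidently the test process's PID (it matches the process number in the E0610 log prefixes). A sketch of reading a synced file back the way the test does:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Files staged under $MINIKUBE_HOME/files/etc/... appear at /etc/...
	// inside the VM after start; here we just read one back over ssh.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-222000",
		"ssh", "sudo cat /etc/test/nested/copy/1682/hosts").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	fmt.Printf("synced content: %s", out)
}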

TestFunctional/parallel/CertSync (1.08s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/1682.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "sudo cat /etc/ssl/certs/1682.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/1682.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "sudo cat /usr/share/ca-certificates/1682.pem"
E0610 09:30:21.158586    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
E0610 09:30:21.166961    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
E0610 09:30:21.177809    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
E0610 09:30:21.199072    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "sudo cat /etc/ssl/certs/51391683.0"
E0610 09:30:21.242088    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
E0610 09:30:21.323104    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
functional_test.go:1994: Checking for existence of /etc/ssl/certs/16822.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "sudo cat /etc/ssl/certs/16822.pem"
E0610 09:30:21.485692    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/16822.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "sudo cat /usr/share/ca-certificates/16822.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0610 09:30:21.807688    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CertSync (1.08s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-222000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-222000 ssh "sudo systemctl is-active crio": exit status 1 (120.357715ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)
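
The outer exit status 1 is the desired result here: `systemctl is-active` exits nonzero (3, surfaced above as "ssh: Process exited with status 3") when a unit is inactive, and crio must be inactive while docker is the selected runtime. A sketch that treats that nonzero exit as the passing case:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-222000",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.CombinedOutput()
	// "inactive" with a nonzero exit is the desired outcome: the
	// non-selected runtime must not be running.
	if strings.Contains(string(out), "inactive") {
		fmt.Println("crio is disabled, as expected (exit err:", err, ")")
		return
	}
	fmt.Printf("unexpected state: %s", out)
}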

TestFunctional/parallel/License (0.83s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.83s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.45s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image ls --format short --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-222000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-222000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-222000
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-222000 image ls --format short --alsologtostderr:
I0610 09:31:22.761189    2882 out.go:296] Setting OutFile to fd 1 ...
I0610 09:31:22.780435    2882 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:31:22.780455    2882 out.go:309] Setting ErrFile to fd 2...
I0610 09:31:22.780464    2882 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:31:22.780707    2882 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
I0610 09:31:22.802741    2882 config.go:182] Loaded profile config "functional-222000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:31:22.802934    2882 config.go:182] Loaded profile config "functional-222000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:31:22.803546    2882 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 09:31:22.803615    2882 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 09:31:22.811141    2882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50454
I0610 09:31:22.811527    2882 main.go:141] libmachine: () Calling .GetVersion
I0610 09:31:22.811983    2882 main.go:141] libmachine: Using API Version  1
I0610 09:31:22.811996    2882 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 09:31:22.812229    2882 main.go:141] libmachine: () Calling .GetMachineName
I0610 09:31:22.812339    2882 main.go:141] libmachine: (functional-222000) Calling .GetState
I0610 09:31:22.812415    2882 main.go:141] libmachine: (functional-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 09:31:22.812500    2882 main.go:141] libmachine: (functional-222000) DBG | hyperkit pid from json: 2094
I0610 09:31:22.813770    2882 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 09:31:22.813795    2882 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 09:31:22.820925    2882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50456
I0610 09:31:22.821338    2882 main.go:141] libmachine: () Calling .GetVersion
I0610 09:31:22.821750    2882 main.go:141] libmachine: Using API Version  1
I0610 09:31:22.821769    2882 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 09:31:22.822010    2882 main.go:141] libmachine: () Calling .GetMachineName
I0610 09:31:22.822138    2882 main.go:141] libmachine: (functional-222000) Calling .DriverName
I0610 09:31:22.822367    2882 ssh_runner.go:195] Run: systemctl --version
I0610 09:31:22.822387    2882 main.go:141] libmachine: (functional-222000) Calling .GetSSHHostname
I0610 09:31:22.822487    2882 main.go:141] libmachine: (functional-222000) Calling .GetSSHPort
I0610 09:31:22.822578    2882 main.go:141] libmachine: (functional-222000) Calling .GetSSHKeyPath
I0610 09:31:22.822670    2882 main.go:141] libmachine: (functional-222000) Calling .GetSSHUsername
I0610 09:31:22.822776    2882 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/functional-222000/id_rsa Username:docker}
I0610 09:31:22.875949    2882 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0610 09:31:22.898860    2882 main.go:141] libmachine: Making call to close driver server
I0610 09:31:22.898871    2882 main.go:141] libmachine: (functional-222000) Calling .Close
I0610 09:31:22.899015    2882 main.go:141] libmachine: Successfully made call to close driver server
I0610 09:31:22.899026    2882 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 09:31:22.899034    2882 main.go:141] libmachine: Making call to close driver server
I0610 09:31:22.899039    2882 main.go:141] libmachine: (functional-222000) Calling .Close
I0610 09:31:22.899061    2882 main.go:141] libmachine: (functional-222000) DBG | Closing plugin on server side
I0610 09:31:22.899223    2882 main.go:141] libmachine: Successfully made call to close driver server
I0610 09:31:22.899235    2882 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 09:31:22.899269    2882 main.go:141] libmachine: (functional-222000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image ls --format table --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-222000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | f9c14fe76d502 | 143MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/localhost/my-image                | functional-222000 | 69ec06288feea | 1.24MB |
| docker.io/library/nginx                     | alpine            | fe7edaf8a8dcf | 41.4MB |
| registry.k8s.io/kube-apiserver              | v1.27.2           | c5b13e4f7806d | 121MB  |
| gcr.io/google-containers/addon-resizer      | functional-222000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/etcd                        | 3.5.7-0           | 86b6af7dd652c | 296MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-controller-manager     | v1.27.2           | ac2b7465ebba9 | 112MB  |
| docker.io/library/mysql                     | 5.7               | dd6675b5cfea1 | 569MB  |
| registry.k8s.io/kube-proxy                  | v1.27.2           | b8aa50768fd67 | 71.1MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-222000 | 07b93eea0a667 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.27.2           | 89e70da428d29 | 58.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-222000 image ls --format table --alsologtostderr:
I0610 09:31:26.551091    2910 out.go:296] Setting OutFile to fd 1 ...
I0610 09:31:26.551276    2910 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:31:26.551283    2910 out.go:309] Setting ErrFile to fd 2...
I0610 09:31:26.551287    2910 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:31:26.551403    2910 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
I0610 09:31:26.551994    2910 config.go:182] Loaded profile config "functional-222000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:31:26.552082    2910 config.go:182] Loaded profile config "functional-222000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:31:26.552462    2910 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 09:31:26.552501    2910 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 09:31:26.559149    2910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50512
I0610 09:31:26.559530    2910 main.go:141] libmachine: () Calling .GetVersion
I0610 09:31:26.560012    2910 main.go:141] libmachine: Using API Version  1
I0610 09:31:26.560025    2910 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 09:31:26.560239    2910 main.go:141] libmachine: () Calling .GetMachineName
I0610 09:31:26.560347    2910 main.go:141] libmachine: (functional-222000) Calling .GetState
I0610 09:31:26.560444    2910 main.go:141] libmachine: (functional-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 09:31:26.560510    2910 main.go:141] libmachine: (functional-222000) DBG | hyperkit pid from json: 2094
I0610 09:31:26.561724    2910 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 09:31:26.561745    2910 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 09:31:26.568481    2910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50514
I0610 09:31:26.568811    2910 main.go:141] libmachine: () Calling .GetVersion
I0610 09:31:26.569156    2910 main.go:141] libmachine: Using API Version  1
I0610 09:31:26.569175    2910 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 09:31:26.569407    2910 main.go:141] libmachine: () Calling .GetMachineName
I0610 09:31:26.569514    2910 main.go:141] libmachine: (functional-222000) Calling .DriverName
I0610 09:31:26.569669    2910 ssh_runner.go:195] Run: systemctl --version
I0610 09:31:26.569691    2910 main.go:141] libmachine: (functional-222000) Calling .GetSSHHostname
I0610 09:31:26.569768    2910 main.go:141] libmachine: (functional-222000) Calling .GetSSHPort
I0610 09:31:26.569837    2910 main.go:141] libmachine: (functional-222000) Calling .GetSSHKeyPath
I0610 09:31:26.569923    2910 main.go:141] libmachine: (functional-222000) Calling .GetSSHUsername
I0610 09:31:26.570001    2910 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/functional-222000/id_rsa Username:docker}
I0610 09:31:26.618402    2910 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0610 09:31:26.636890    2910 main.go:141] libmachine: Making call to close driver server
I0610 09:31:26.636900    2910 main.go:141] libmachine: (functional-222000) Calling .Close
I0610 09:31:26.637034    2910 main.go:141] libmachine: (functional-222000) DBG | Closing plugin on server side
I0610 09:31:26.637062    2910 main.go:141] libmachine: Successfully made call to close driver server
I0610 09:31:26.637071    2910 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 09:31:26.637078    2910 main.go:141] libmachine: Making call to close driver server
I0610 09:31:26.637084    2910 main.go:141] libmachine: (functional-222000) Calling .Close
I0610 09:31:26.637692    2910 main.go:141] libmachine: Successfully made call to close driver server
I0610 09:31:26.637768    2910 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 09:31:26.637778    2910 main.go:141] libmachine: (functional-222000) DBG | Closing plugin on server side
2023/06/10 09:31:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.15s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image ls --format json --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-222000 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"fe7edaf8a8dcf9af72f49cf0a0219e3ace17667bafc537f0d4a0ab1bd7f10467","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41400000"},{"id":"89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.2"],"size":"58400000"},{"id":"ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.2"],"size":"112000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"296000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-222000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"07b93eea0a66736a09c3361f1d7534092e3265a4ff280468da3645ce4af84ad5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-222000"],"size":"30"},{"id":"f9c14fe76d502861ba0939bc3189e642c02e257f06f4c0214b1f8ca329326cda","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"143000000"},{"id":"c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.2"],"size":"121000000"},{"id":"b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.2"],"size":"71100000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"dd6675b5cfea17abb655ea8229cbcfa5db9d0b041f839db0c24228c2e18a4bdf","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"569000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"69ec06288feea49572ae6d4a5040da1e07bed6180882d02dca8492f63bd8fc61","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-222000"],"size":"1240000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-222000 image ls --format json --alsologtostderr:
I0610 09:31:26.390241    2906 out.go:296] Setting OutFile to fd 1 ...
I0610 09:31:26.390426    2906 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:31:26.390433    2906 out.go:309] Setting ErrFile to fd 2...
I0610 09:31:26.390437    2906 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:31:26.390546    2906 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
I0610 09:31:26.391127    2906 config.go:182] Loaded profile config "functional-222000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:31:26.391212    2906 config.go:182] Loaded profile config "functional-222000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:31:26.391539    2906 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 09:31:26.391586    2906 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 09:31:26.398291    2906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50506
I0610 09:31:26.398667    2906 main.go:141] libmachine: () Calling .GetVersion
I0610 09:31:26.399096    2906 main.go:141] libmachine: Using API Version  1
I0610 09:31:26.399108    2906 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 09:31:26.399307    2906 main.go:141] libmachine: () Calling .GetMachineName
I0610 09:31:26.399408    2906 main.go:141] libmachine: (functional-222000) Calling .GetState
I0610 09:31:26.399486    2906 main.go:141] libmachine: (functional-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 09:31:26.399557    2906 main.go:141] libmachine: (functional-222000) DBG | hyperkit pid from json: 2094
I0610 09:31:26.400786    2906 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 09:31:26.400807    2906 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 09:31:26.407541    2906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50508
I0610 09:31:26.407872    2906 main.go:141] libmachine: () Calling .GetVersion
I0610 09:31:26.408201    2906 main.go:141] libmachine: Using API Version  1
I0610 09:31:26.408214    2906 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 09:31:26.408447    2906 main.go:141] libmachine: () Calling .GetMachineName
I0610 09:31:26.408558    2906 main.go:141] libmachine: (functional-222000) Calling .DriverName
I0610 09:31:26.408690    2906 ssh_runner.go:195] Run: systemctl --version
I0610 09:31:26.408708    2906 main.go:141] libmachine: (functional-222000) Calling .GetSSHHostname
I0610 09:31:26.408782    2906 main.go:141] libmachine: (functional-222000) Calling .GetSSHPort
I0610 09:31:26.408850    2906 main.go:141] libmachine: (functional-222000) Calling .GetSSHKeyPath
I0610 09:31:26.408940    2906 main.go:141] libmachine: (functional-222000) Calling .GetSSHUsername
I0610 09:31:26.409021    2906 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/functional-222000/id_rsa Username:docker}
I0610 09:31:26.466256    2906 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0610 09:31:26.483689    2906 main.go:141] libmachine: Making call to close driver server
I0610 09:31:26.483698    2906 main.go:141] libmachine: (functional-222000) Calling .Close
I0610 09:31:26.483845    2906 main.go:141] libmachine: (functional-222000) DBG | Closing plugin on server side
I0610 09:31:26.483849    2906 main.go:141] libmachine: Successfully made call to close driver server
I0610 09:31:26.483857    2906 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 09:31:26.483865    2906 main.go:141] libmachine: Making call to close driver server
I0610 09:31:26.483872    2906 main.go:141] libmachine: (functional-222000) Calling .Close
I0610 09:31:26.484509    2906 main.go:141] libmachine: Successfully made call to close driver server
I0610 09:31:26.484544    2906 main.go:141] libmachine: (functional-222000) DBG | Closing plugin on server side
I0610 09:31:26.484562    2906 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)
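
The JSON above is an array of objects with id, repoDigests, repoTags, and a string-typed size, so a small struct suffices to decode it. A sketch grounded in exactly those fields:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image matches the objects printed by `image ls --format json` above;
// note that Size is a decimal string, not a number.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-222000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}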

TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image ls --format yaml --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-222000 image ls --format yaml --alsologtostderr:
- id: dd6675b5cfea17abb655ea8229cbcfa5db9d0b041f839db0c24228c2e18a4bdf
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "569000000"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "296000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-222000
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 07b93eea0a66736a09c3361f1d7534092e3265a4ff280468da3645ce4af84ad5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-222000
size: "30"
- id: f9c14fe76d502861ba0939bc3189e642c02e257f06f4c0214b1f8ca329326cda
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "143000000"
- id: 89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.2
size: "58400000"
- id: ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.2
size: "112000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: fe7edaf8a8dcf9af72f49cf0a0219e3ace17667bafc537f0d4a0ab1bd7f10467
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41400000"
- id: c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.2
size: "121000000"
- id: b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.2
size: "71100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:267: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-222000 image ls --format yaml --alsologtostderr:
I0610 09:31:22.965228    2888 out.go:296] Setting OutFile to fd 1 ...
I0610 09:31:22.965409    2888 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:31:22.965414    2888 out.go:309] Setting ErrFile to fd 2...
I0610 09:31:22.965418    2888 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:31:22.965527    2888 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
I0610 09:31:22.966111    2888 config.go:182] Loaded profile config "functional-222000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:31:22.966196    2888 config.go:182] Loaded profile config "functional-222000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:31:22.967354    2888 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 09:31:22.967596    2888 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 09:31:22.974573    2888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50475
I0610 09:31:22.974969    2888 main.go:141] libmachine: () Calling .GetVersion
I0610 09:31:22.975397    2888 main.go:141] libmachine: Using API Version  1
I0610 09:31:22.975406    2888 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 09:31:22.975653    2888 main.go:141] libmachine: () Calling .GetMachineName
I0610 09:31:22.975773    2888 main.go:141] libmachine: (functional-222000) Calling .GetState
I0610 09:31:22.975855    2888 main.go:141] libmachine: (functional-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 09:31:22.975924    2888 main.go:141] libmachine: (functional-222000) DBG | hyperkit pid from json: 2094
I0610 09:31:22.977193    2888 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 09:31:22.977217    2888 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 09:31:22.984038    2888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50477
I0610 09:31:22.984394    2888 main.go:141] libmachine: () Calling .GetVersion
I0610 09:31:22.984772    2888 main.go:141] libmachine: Using API Version  1
I0610 09:31:22.984786    2888 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 09:31:22.984992    2888 main.go:141] libmachine: () Calling .GetMachineName
I0610 09:31:22.985084    2888 main.go:141] libmachine: (functional-222000) Calling .DriverName
I0610 09:31:22.985237    2888 ssh_runner.go:195] Run: systemctl --version
I0610 09:31:22.985255    2888 main.go:141] libmachine: (functional-222000) Calling .GetSSHHostname
I0610 09:31:22.985335    2888 main.go:141] libmachine: (functional-222000) Calling .GetSSHPort
I0610 09:31:22.985415    2888 main.go:141] libmachine: (functional-222000) Calling .GetSSHKeyPath
I0610 09:31:22.985488    2888 main.go:141] libmachine: (functional-222000) Calling .GetSSHUsername
I0610 09:31:22.985577    2888 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/functional-222000/id_rsa Username:docker}
I0610 09:31:23.048175    2888 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0610 09:31:23.067377    2888 main.go:141] libmachine: Making call to close driver server
I0610 09:31:23.067387    2888 main.go:141] libmachine: (functional-222000) Calling .Close
I0610 09:31:23.067535    2888 main.go:141] libmachine: (functional-222000) DBG | Closing plugin on server side
I0610 09:31:23.067548    2888 main.go:141] libmachine: Successfully made call to close driver server
I0610 09:31:23.067556    2888 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 09:31:23.067565    2888 main.go:141] libmachine: Making call to close driver server
I0610 09:31:23.067571    2888 main.go:141] libmachine: (functional-222000) Calling .Close
I0610 09:31:23.067717    2888 main.go:141] libmachine: Successfully made call to close driver server
I0610 09:31:23.067728    2888 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 09:31:23.067735    2888 main.go:141] libmachine: (functional-222000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-222000 ssh pgrep buildkitd: exit status 1 (122.196385ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image build -t localhost/my-image:functional-222000 testdata/build --alsologtostderr
functional_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p functional-222000 image build -t localhost/my-image:functional-222000 testdata/build --alsologtostderr: (2.890971831s)
functional_test.go:318: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-222000 image build -t localhost/my-image:functional-222000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in d55571f7b756
Removing intermediate container d55571f7b756
---> e83887ac1f61
Step 3/3 : ADD content.txt /
---> 69ec06288fee
Successfully built 69ec06288fee
Successfully tagged localhost/my-image:functional-222000
functional_test.go:321: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-222000 image build -t localhost/my-image:functional-222000 testdata/build --alsologtostderr:
I0610 09:31:23.256007    2897 out.go:296] Setting OutFile to fd 1 ...
I0610 09:31:23.256275    2897 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:31:23.256282    2897 out.go:309] Setting ErrFile to fd 2...
I0610 09:31:23.256286    2897 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:31:23.256395    2897 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1235/.minikube/bin
I0610 09:31:23.256980    2897 config.go:182] Loaded profile config "functional-222000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:31:23.257598    2897 config.go:182] Loaded profile config "functional-222000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:31:23.257956    2897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 09:31:23.257991    2897 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 09:31:23.264699    2897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50491
I0610 09:31:23.265084    2897 main.go:141] libmachine: () Calling .GetVersion
I0610 09:31:23.265510    2897 main.go:141] libmachine: Using API Version  1
I0610 09:31:23.265519    2897 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 09:31:23.265724    2897 main.go:141] libmachine: () Calling .GetMachineName
I0610 09:31:23.265835    2897 main.go:141] libmachine: (functional-222000) Calling .GetState
I0610 09:31:23.265917    2897 main.go:141] libmachine: (functional-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 09:31:23.265981    2897 main.go:141] libmachine: (functional-222000) DBG | hyperkit pid from json: 2094
I0610 09:31:23.267186    2897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 09:31:23.267213    2897 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 09:31:23.273845    2897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50493
I0610 09:31:23.274176    2897 main.go:141] libmachine: () Calling .GetVersion
I0610 09:31:23.274518    2897 main.go:141] libmachine: Using API Version  1
I0610 09:31:23.274529    2897 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 09:31:23.274738    2897 main.go:141] libmachine: () Calling .GetMachineName
I0610 09:31:23.274834    2897 main.go:141] libmachine: (functional-222000) Calling .DriverName
I0610 09:31:23.274977    2897 ssh_runner.go:195] Run: systemctl --version
I0610 09:31:23.274995    2897 main.go:141] libmachine: (functional-222000) Calling .GetSSHHostname
I0610 09:31:23.275082    2897 main.go:141] libmachine: (functional-222000) Calling .GetSSHPort
I0610 09:31:23.275162    2897 main.go:141] libmachine: (functional-222000) Calling .GetSSHKeyPath
I0610 09:31:23.275249    2897 main.go:141] libmachine: (functional-222000) Calling .GetSSHUsername
I0610 09:31:23.275347    2897 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1235/.minikube/machines/functional-222000/id_rsa Username:docker}
I0610 09:31:23.323107    2897 build_images.go:151] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3780119720.tar
I0610 09:31:23.323186    2897 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0610 09:31:23.330567    2897 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3780119720.tar
I0610 09:31:23.334008    2897 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3780119720.tar: stat -c "%s %y" /var/lib/minikube/build/build.3780119720.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3780119720.tar': No such file or directory
I0610 09:31:23.334034    2897 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3780119720.tar --> /var/lib/minikube/build/build.3780119720.tar (3072 bytes)
I0610 09:31:23.351737    2897 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3780119720
I0610 09:31:23.359887    2897 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3780119720 -xf /var/lib/minikube/build/build.3780119720.tar
I0610 09:31:23.366467    2897 docker.go:336] Building image: /var/lib/minikube/build/build.3780119720
I0610 09:31:23.366533    2897 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-222000 /var/lib/minikube/build/build.3780119720
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0610 09:31:26.154157    2897 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-222000 /var/lib/minikube/build/build.3780119720: (2.697182066s)
I0610 09:31:26.154231    2897 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3780119720
I0610 09:31:26.161054    2897 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3780119720.tar
I0610 09:31:26.167543    2897 build_images.go:207] Built localhost/my-image:functional-222000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3780119720.tar
I0610 09:31:26.167566    2897 build_images.go:123] succeeded building to: functional-222000
I0610 09:31:26.167575    2897 build_images.go:124] failed building to: 
I0610 09:31:26.167588    2897 main.go:141] libmachine: Making call to close driver server
I0610 09:31:26.167595    2897 main.go:141] libmachine: (functional-222000) Calling .Close
I0610 09:31:26.167739    2897 main.go:141] libmachine: Successfully made call to close driver server
I0610 09:31:26.167749    2897 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 09:31:26.167756    2897 main.go:141] libmachine: Making call to close driver server
I0610 09:31:26.167758    2897 main.go:141] libmachine: (functional-222000) DBG | Closing plugin on server side
I0610 09:31:26.167762    2897 main.go:141] libmachine: (functional-222000) Calling .Close
I0610 09:31:26.167889    2897 main.go:141] libmachine: (functional-222000) DBG | Closing plugin on server side
I0610 09:31:26.167937    2897 main.go:141] libmachine: Successfully made call to close driver server
I0610 09:31:26.167955    2897 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.17s)

TestFunctional/parallel/ImageCommands/Setup (3.29s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.210256721s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-222000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.29s)

TestFunctional/parallel/DockerEnv/bash (0.72s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-222000 docker-env) && out/minikube-darwin-amd64 status -p functional-222000"
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-222000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.72s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image load --daemon gcr.io/google-containers/addon-resizer:functional-222000 --alsologtostderr
functional_test.go:353: (dbg) Done: out/minikube-darwin-amd64 -p functional-222000 image load --daemon gcr.io/google-containers/addon-resizer:functional-222000 --alsologtostderr: (2.974258921s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.18s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image load --daemon gcr.io/google-containers/addon-resizer:functional-222000 --alsologtostderr
E0610 09:30:26.341453    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
functional_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p functional-222000 image load --daemon gcr.io/google-containers/addon-resizer:functional-222000 --alsologtostderr: (1.967121234s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.15s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.741039111s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-222000
functional_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image load --daemon gcr.io/google-containers/addon-resizer:functional-222000 --alsologtostderr
E0610 09:30:31.463343    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
functional_test.go:243: (dbg) Done: out/minikube-darwin-amd64 -p functional-222000 image load --daemon gcr.io/google-containers/addon-resizer:functional-222000 --alsologtostderr: (2.967394068s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.91s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image save gcr.io/google-containers/addon-resizer:functional-222000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:378: (dbg) Done: out/minikube-darwin-amd64 -p functional-222000 image save gcr.io/google-containers/addon-resizer:functional-222000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.193452864s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.19s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image rm gcr.io/google-containers/addon-resizer:functional-222000 --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-darwin-amd64 -p functional-222000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.13096369s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-222000
functional_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 image save --daemon gcr.io/google-containers/addon-resizer:functional-222000 --alsologtostderr
functional_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p functional-222000 image save --daemon gcr.io/google-containers/addon-resizer:functional-222000 --alsologtostderr: (2.260679346s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-222000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.36s)

TestFunctional/parallel/ServiceCmd/DeployApp (14.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-222000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-222000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-qxwxq" [42f3edfc-0ef1-468f-8875-ed8215680d83] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-qxwxq" [42f3edfc-0ef1-468f-8875-ed8215680d83] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.008116409s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.14s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.39s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-222000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-222000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-222000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-222000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2590: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.39s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-222000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-222000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c5808971-91ea-4f69-ae46-c46850769c06] Pending
helpers_test.go:344: "nginx-svc" [c5808971-91ea-4f69-ae46-c46850769c06] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c5808971-91ea-4f69-ae46-c46850769c06] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.007165642s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.13s)

TestFunctional/parallel/ServiceCmd/List (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.37s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 service list -o json
functional_test.go:1492: Took "381.625922ms" to run "out/minikube-darwin-amd64 -p functional-222000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.64.4:30905
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

TestFunctional/parallel/ServiceCmd/Format (0.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.23s)

TestFunctional/parallel/ServiceCmd/URL (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.64.4:30905
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.24s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-222000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.17.139 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-222000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1313: Took "193.000829ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1327: Took "64.436981ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1364: Took "195.769611ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1377: Took "63.487913ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

TestFunctional/parallel/MountCmd/any-port (6.96s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-222000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2544378146/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1686414670453962000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2544378146/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1686414670453962000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2544378146/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1686414670453962000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2544378146/001/test-1686414670453962000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-222000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (135.142336ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 10 16:31 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 10 16:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 10 16:31 test-1686414670453962000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh cat /mount-9p/test-1686414670453962000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-222000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0b3b4e98-39ae-4c72-b9f3-d63e3986e604] Pending
helpers_test.go:344: "busybox-mount" [0b3b4e98-39ae-4c72-b9f3-d63e3986e604] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0b3b4e98-39ae-4c72-b9f3-d63e3986e604] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0b3b4e98-39ae-4c72-b9f3-d63e3986e604] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.008602784s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-222000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-222000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2544378146/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.96s)

TestFunctional/parallel/MountCmd/specific-port (1.45s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-222000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1568217889/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-222000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (134.498755ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-222000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1568217889/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-222000 ssh "sudo umount -f /mount-9p": exit status 1 (119.763566ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-222000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-222000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1568217889/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.45s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-222000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1523822712/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-222000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1523822712/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-222000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1523822712/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-222000 ssh "findmnt -T" /mount1: exit status 1 (152.895443ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-222000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-222000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-222000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1523822712/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-222000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1523822712/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-222000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1523822712/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-222000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-222000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-222000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (39.02s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-512000 --driver=hyperkit 
E0610 09:31:43.234632    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-512000 --driver=hyperkit : (39.01807076s)
--- PASS: TestImageBuild/serial/Setup (39.02s)

TestImageBuild/serial/NormalBuild (2.22s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-512000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-512000: (2.22272882s)
--- PASS: TestImageBuild/serial/NormalBuild (2.22s)

TestImageBuild/serial/BuildWithBuildArg (0.66s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-512000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.66s)

TestImageBuild/serial/BuildWithDockerIgnore (0.21s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-512000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.21s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.19s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-512000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.19s)

TestIngressAddonLegacy/StartLegacyK8sCluster (95.09s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-946000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
E0610 09:33:05.156220    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-946000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (1m35.092734665s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (95.09s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.22s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-946000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-946000 addons enable ingress --alsologtostderr -v=5: (18.220863315s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.22s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.47s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-946000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.47s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (30.94s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-946000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-946000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.264107899s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-946000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-946000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5ac77d29-fc4f-43d3-b2c4-9a4e48d6b340] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5ac77d29-fc4f-43d3-b2c4-9a4e48d6b340] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.010579763s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-946000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-946000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-946000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.64.6
addons_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-946000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-946000 addons disable ingress-dns --alsologtostderr -v=1: (2.50225866s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-946000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-946000 addons disable ingress --alsologtostderr -v=1: (7.300440259s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (30.94s)

TestJSONOutput/start/Command (52.18s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-290000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0610 09:35:21.248649    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
E0610 09:35:22.320437    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:35:22.325547    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:35:22.337564    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:35:22.358578    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:35:22.400096    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:35:22.481052    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:35:22.641736    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:35:22.963516    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:35:23.603735    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:35:24.885331    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:35:27.446530    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:35:32.566690    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:35:42.808145    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:35:48.996195    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-290000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (52.182626854s)
--- PASS: TestJSONOutput/start/Command (52.18s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.47s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-290000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.47s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.45s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-290000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.16s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-290000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-290000 --output=json --user=testUser: (8.159985641s)
--- PASS: TestJSONOutput/stop/Command (8.16s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.69s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-220000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-220000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (340.94899ms)

-- stdout --
	{"specversion":"1.0","id":"88b65d1a-85d0-4c7e-baa9-a000737e362c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-220000] minikube v1.30.1 on Darwin 13.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"71c61774-282f-4c97-8979-b4bba00b51ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16578"}}
	{"specversion":"1.0","id":"4645852d-5982-4cde-8f2d-1d70cbd2b8ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig"}}
	{"specversion":"1.0","id":"d43ea47a-746a-4c11-a644-b011ae7a3d8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"3f5b34a1-5954-4435-b38c-42fa6dd459f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"069a25db-abc6-4df6-9892-991643e9e0d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube"}}
	{"specversion":"1.0","id":"6303ce51-5eb8-435e-8492-125291f3a27e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7320735a-fc8f-4912-9420-ab554cf68cd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-220000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-220000
--- PASS: TestErrorJSONOutput (0.69s)
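For anyone replaying this check by hand: every line minikube emits under --output=json is a CloudEvents-style envelope (specversion, id, source, type, datacontenttype, data), and error events such as the DRV_UNSUPPORTED_OS one above carry name, exitcode, and message inside data. A minimal Go sketch that decodes the error event copied from the stdout above (illustrative only, not the test suite's own code):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // event mirrors the CloudEvents-style envelope shown above; only the
    // fields used here are declared.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        // One error event copied verbatim from the test's stdout.
        line := `{"specversion":"1.0","id":"7320735a-fc8f-4912-9420-ab554cf68cd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

        var ev event
        if err := json.Unmarshal([]byte(line), &ev); err != nil {
            panic(err)
        }
        if ev.Type == "io.k8s.sigs.minikube.error" {
            fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
        }
    }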

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (84.06s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-366000 --driver=hyperkit 
E0610 09:36:03.290071    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-366000 --driver=hyperkit : (36.550023832s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-369000 --driver=hyperkit 
E0610 09:36:44.251945    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-369000 --driver=hyperkit : (36.180042744s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-366000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-369000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-369000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-369000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-369000: (5.243750587s)
helpers_test.go:175: Cleaning up "first-366000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-366000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-366000: (5.286196049s)
--- PASS: TestMinikubeProfile (84.06s)

TestMountStart/serial/StartWithMountFirst (19.31s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-711000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-711000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (18.304199136s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.31s)

TestMountStart/serial/VerifyMountFirst (0.29s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-711000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-711000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)
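The two ssh probes above are the whole verification: the shared directory must list at /minikube-host, and the guest's mount table must show a 9p filesystem, the protocol minikube's host mounts ride on. A rough Go equivalent of the same probes (binary path and profile name are taken from the log; this is a sketch, not the test's implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // verifyMount mirrors the two checks above: the mount point lists,
    // and a 9p filesystem appears in the guest's mount table.
    func verifyMount(binary, profile string) error {
        if out, err := exec.Command(binary, "-p", profile, "ssh", "--", "ls", "/minikube-host").CombinedOutput(); err != nil {
            return fmt.Errorf("ls /minikube-host: %v: %s", err, out)
        }
        out, err := exec.Command(binary, "-p", profile, "ssh", "--", "mount").CombinedOutput()
        if err != nil {
            return fmt.Errorf("mount: %v: %s", err, out)
        }
        if !strings.Contains(string(out), "9p") {
            return fmt.Errorf("no 9p mount found")
        }
        return nil
    }

    func main() {
        if err := verifyMount("out/minikube-darwin-amd64", "mount-start-1-711000"); err != nil {
            fmt.Println(err)
        }
    }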

TestMountStart/serial/StartWithMountSecond (18.96s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-721000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-721000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (17.958958804s)
--- PASS: TestMountStart/serial/StartWithMountSecond (18.96s)

TestMountStart/serial/VerifyMountSecond (0.29s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-721000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-721000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (2.31s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-711000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-711000 --alsologtostderr -v=5: (2.312143631s)
--- PASS: TestMountStart/serial/DeleteFirst (2.31s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-721000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-721000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (2.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-721000
E0610 09:38:06.172164    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-721000: (2.197164127s)
--- PASS: TestMountStart/serial/Stop (2.20s)

TestMountStart/serial/RestartStopped (17.13s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-721000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-721000: (16.125823227s)
--- PASS: TestMountStart/serial/RestartStopped (17.13s)

TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-721000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-721000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/RestartKeepsNodes (61.75s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-826000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-826000
multinode_test.go:290: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-826000: (8.235834025s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-826000 --wait=true -v=8 --alsologtostderr
E0610 09:40:21.248140    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
E0610 09:40:22.320914    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:40:42.724854    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:40:50.012885    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-826000 --wait=true -v=8 --alsologtostderr: (53.434846027s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-826000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (61.75s)

TestPreload (160.75s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-090000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0610 09:44:20.788048    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:44:48.486480    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 09:45:21.245484    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
E0610 09:45:22.318460    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-090000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m16.538514401s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-090000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-090000 image pull gcr.io/k8s-minikube/busybox: (2.342210517s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-090000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-090000: (8.240044352s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-090000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E0610 09:46:44.354085    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-090000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m8.23753325s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-090000 image list
helpers_test.go:175: Cleaning up "test-preload-090000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-090000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-090000: (5.246368076s)
--- PASS: TestPreload (160.75s)

TestScheduledStopUnix (106.36s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-048000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-048000 --memory=2048 --driver=hyperkit : (35.020813918s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-048000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-048000 -n scheduled-stop-048000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-048000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-048000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-048000 -n scheduled-stop-048000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-048000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-048000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-048000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-048000: exit status 7 (53.279039ms)

-- stdout --
	scheduled-stop-048000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-048000 -n scheduled-stop-048000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-048000 -n scheduled-stop-048000: exit status 7 (51.555559ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-048000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-048000
--- PASS: TestScheduledStopUnix (106.36s)
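The sequence above deserves a gloss: a stop is scheduled five minutes out, re-scheduled to 15 seconds (the "os: process already finished" lines are the previous scheduler process being replaced), cancelled, then scheduled once more and allowed to fire; after it fires, status exits 7 because the host is stopped, which the harness flags as "(may be ok)" rather than a failure. A rough Go driver for the same flow, with every command copied from the log (a sketch, not the test's own code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // run shells out to the minikube binary used throughout this report.
    func run(args ...string) {
        out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
        fmt.Printf("minikube %v -> err=%v\n%s", args, err, out)
    }

    func main() {
        run("stop", "-p", "scheduled-stop-048000", "--schedule", "5m")   // schedule far out
        run("stop", "-p", "scheduled-stop-048000", "--cancel-scheduled") // nothing stops yet
        run("stop", "-p", "scheduled-stop-048000", "--schedule", "15s")  // schedule again, let it fire
        time.Sleep(20 * time.Second)
        // Exit status 7 here only means the host is stopped ("may be ok" above).
        run("status", "-p", "scheduled-stop-048000")
    }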

TestSkaffold (112.16s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3085416452 version
skaffold_test.go:63: skaffold version: v2.5.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-533000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-533000 --memory=2600 --driver=hyperkit : (34.680274963s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3085416452 run --minikube-profile skaffold-533000 --kube-context skaffold-533000 --status-check=true --port-forward=false --interactive=false
E0610 09:49:20.785586    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3085416452 run --minikube-profile skaffold-533000 --kube-context skaffold-533000 --status-check=true --port-forward=false --interactive=false: (58.318774688s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6fbcd487f8-mdtxl" [32babc5a-5d9b-46e2-9c1b-2d39fe8b46ba] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.010782964s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5d8dbd6446-7tt9h" [b9bcebee-2833-4f55-ae30-83a44b19a461] Running
E0610 09:50:21.244368    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
E0610 09:50:22.318463    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005817424s
helpers_test.go:175: Cleaning up "skaffold-533000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-533000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-533000: (5.248178961s)
--- PASS: TestSkaffold (112.16s)

TestRunningBinaryUpgrade (172.63s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.503082828.exe start -p running-upgrade-036000 --memory=2200 --vm-driver=hyperkit 
E0610 09:54:20.719945    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.503082828.exe start -p running-upgrade-036000 --memory=2200 --vm-driver=hyperkit : (1m38.212493126s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-036000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0610 09:55:14.468960    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 09:55:14.475190    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 09:55:14.487372    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 09:55:14.508057    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 09:55:14.549701    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 09:55:14.629823    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 09:55:14.791412    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 09:55:15.111578    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 09:55:15.752717    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 09:55:17.033537    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 09:55:19.594219    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 09:55:21.175689    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
E0610 09:55:22.249839    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 09:55:24.714299    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 09:55:34.954366    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
version_upgrade_test.go:142: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-036000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m5.708173966s)
helpers_test.go:175: Cleaning up "running-upgrade-036000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-036000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-036000: (5.266230351s)
--- PASS: TestRunningBinaryUpgrade (172.63s)

TestKubernetesUpgrade (170.62s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-639000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 
E0610 09:55:55.435479    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-639000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m21.225324934s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-639000
version_upgrade_test.go:239: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-639000: (8.235709583s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-639000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-639000 status --format={{.Host}}: exit status 7 (51.570395ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-639000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:255: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-639000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=hyperkit : (34.07907772s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-639000 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-639000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-639000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (384.657125ms)

-- stdout --
	* [kubernetes-upgrade-639000] minikube v1.30.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-639000
	    minikube start -p kubernetes-upgrade-639000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6390002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.2, by running:
	    
	    minikube start -p kubernetes-upgrade-639000 --kubernetes-version=v1.27.2
	    

** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-639000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=hyperkit 
E0610 09:57:58.314503    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
version_upgrade_test.go:287: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-639000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=hyperkit : (41.349870137s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-639000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-639000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-639000: (5.246367722s)
--- PASS: TestKubernetesUpgrade (170.62s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.11s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.30.1 on darwin
- MINIKUBE_LOCATION=16578
- KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2810821334/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2810821334/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2810821334/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2810821334/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.11s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.02s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.30.1 on darwin
- MINIKUBE_LOCATION=16578
- KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current38218172/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current38218172/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current38218172/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current38218172/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.02s)

TestStoppedBinaryUpgrade/Setup (2.52s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.52s)

TestStoppedBinaryUpgrade/Upgrade (163.64s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1198153605.exe start -p stopped-upgrade-997000 --memory=2200 --vm-driver=hyperkit 
E0610 09:56:36.395098    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1198153605.exe start -p stopped-upgrade-997000 --memory=2200 --vm-driver=hyperkit : (1m32.590852632s)
version_upgrade_test.go:204: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1198153605.exe -p stopped-upgrade-997000 stop
version_upgrade_test.go:204: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1198153605.exe -p stopped-upgrade-997000 stop: (8.06997965s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-997000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:210: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-997000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m2.98112209s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (163.64s)

TestPause/serial/Start (52.8s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-581000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-581000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (52.803409725s)
--- PASS: TestPause/serial/Start (52.80s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.9s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-997000
version_upgrade_test.go:218: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-997000: (2.897426452s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.90s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-151000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-151000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (439.614099ms)

-- stdout --
	* [NoKubernetes-151000] minikube v1.30.1 on Darwin 13.4
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)

TestNoKubernetes/serial/StartWithK8s (38.41s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-151000 --driver=hyperkit 
E0610 09:59:20.708436    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-151000 --driver=hyperkit : (38.258017275s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-151000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.41s)

TestPause/serial/SecondStartNoReconfiguration (40.25s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-581000 --alsologtostderr -v=1 --driver=hyperkit 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-581000 --alsologtostderr -v=1 --driver=hyperkit : (40.235154536s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.25s)

TestNoKubernetes/serial/StartWithStopK8s (17.21s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-151000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-151000 --no-kubernetes --driver=hyperkit : (14.660655409s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-151000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-151000 status -o json: exit status 2 (129.205071ms)

-- stdout --
	{"Name":"NoKubernetes-151000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-151000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-151000: (2.421044512s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.21s)

TestNoKubernetes/serial/Start (19.08s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-151000 --no-kubernetes --driver=hyperkit 
E0610 10:00:14.459087    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-151000 --no-kubernetes --driver=hyperkit : (19.080204246s)
--- PASS: TestNoKubernetes/serial/Start (19.08s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.11s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-151000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-151000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (110.152468ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.11s)

TestNoKubernetes/serial/ProfileList (0.74s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.74s)

TestPause/serial/Pause (0.61s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-581000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

TestNoKubernetes/serial/Stop (2.28s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-151000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-151000: (2.284450828s)
--- PASS: TestNoKubernetes/serial/Stop (2.28s)

TestPause/serial/VerifyStatus (0.14s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-581000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-581000 --output=json --layout=cluster: exit status 2 (137.472688ms)

-- stdout --
	{"Name":"pause-581000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-581000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.14s)
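Note how --layout=cluster reuses HTTP-like status codes: 418 for Paused, 405 for Stopped, 200 for OK, and the command itself exits 2 for a paused cluster, which is why the non-zero exit above still passes. A minimal Go sketch decoding a trimmed copy of the payload above (the struct is illustrative):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // clusterState keeps only the top-level fields used here; per the
    // sample above, 418 means Paused, 405 Stopped, 200 OK.
    type clusterState struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
    }

    func main() {
        sample := `{"Name":"pause-581000","StatusCode":418,"StatusName":"Paused"}`
        var st clusterState
        if err := json.Unmarshal([]byte(sample), &st); err != nil {
            panic(err)
        }
        fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
    }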

TestPause/serial/Unpause (0.47s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-581000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.47s)

TestPause/serial/PauseAgain (0.52s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-581000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.52s)

TestPause/serial/DeletePaused (5.25s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-581000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-581000 --alsologtostderr -v=5: (5.253623143s)
--- PASS: TestPause/serial/DeletePaused (5.25s)

TestNoKubernetes/serial/StartNoArgs (15.92s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-151000 --driver=hyperkit 
E0610 10:00:21.165713    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
E0610 10:00:22.238103    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-151000 --driver=hyperkit : (15.915228016s)
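The E0610 cert_rotation.go:168 lines that recur from here on are client-go's certificate watcher still polling client.crt paths for profiles that no longer exist on disk (addons-200000, functional-222000, and so on); they are noise, not failures. When reading a saved copy of this log they can be filtered out, as a sketch (log.txt is a stand-in filename):

	# drop the cert-rotation noise lines from a saved log
	$ grep -v 'cert_rotation.go:168' log.txt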
--- PASS: TestNoKubernetes/serial/StartNoArgs (15.92s)

TestPause/serial/VerifyDeletedResources (7.25s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (7.253607985s)
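The seven-second `profile list` is itself the verification: after the delete in the previous step, pause-581000 must no longer appear in the listing. A sketch of an explicit check, assuming the JSON keeps its usual valid/invalid layout:

	# prints null when the profile is gone from both lists
	$ out/minikube-darwin-amd64 profile list --output json | jq '[.valid[].Name, .invalid[].Name] | index("pause-581000")'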
--- PASS: TestPause/serial/VerifyDeletedResources (7.25s)

TestNetworkPlugins/group/auto/Start (88.11s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (1m28.110604916s)
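The TestNetworkPlugins groups that follow each boot a fresh hyperkit cluster with a different network setup: no CNI flag for auto, --cni=kindnet/calico/flannel/bridge/false, --cni=testdata/kube-flannel.yaml for custom-flannel, --enable-default-cni=true, and --network-plugin=kubenet. All of them follow the same start pattern, sketched here with a placeholder profile and plugin:

	# <profile> and <plugin> are placeholders, not values from this run
	$ out/minikube-darwin-amd64 start -p <profile> --memory=3072 --wait=true --wait-timeout=15m --cni=<plugin> --driver=hyperkit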
--- PASS: TestNetworkPlugins/group/auto/Start (88.11s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.11s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-151000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-151000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (112.738232ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.11s)

TestNetworkPlugins/group/kindnet/Start (69.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
E0610 10:00:42.150175    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m9.144742738s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.14s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wzfvs" [dc0bab76-57e9-4c5a-8c91-0490bce96194] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.011286858s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-021000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.14s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-021000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-qbdjc" [d49e5fb0-c82f-4f85-bbb1-da6c6e075cd2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-qbdjc" [d49e5fb0-c82f-4f85-bbb1-da6c6e075cd2] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.004388869s
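NetCatPod force-replaces testdata/netcat-deployment.yaml and polls for an app=netcat pod to become Ready; the Pending line above is the normal intermediate state while the dnsutils image pulls. An equivalent one-shot wait, as a sketch:

	# block until the netcat pod reports Ready, mirroring the test's polling
	$ kubectl --context kindnet-021000 wait --for=condition=Ready pod -l app=netcat --timeout=15m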
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.19s)

TestNetworkPlugins/group/auto/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-021000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.14s)

TestNetworkPlugins/group/auto/NetCatPod (13.2s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-021000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-rgndx" [a60ca4ea-8f37-4c79-8e67-3e94bebf9cf7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-rgndx" [a60ca4ea-8f37-4c79-8e67-3e94bebf9cf7] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.006293934s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.20s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-021000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
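DNS, Localhost, and HairPin reuse the same netcat pod: nslookup exercises in-cluster DNS, `nc -z localhost 8080` confirms the pod can reach its own port, and the hairpin case dials back in through the pod's own Service name (netcat), which only works when the CNI supports hairpin traffic. Reproduced by hand, as a sketch:

	# exit=0 means the pod can reach itself via its Service (hairpin OK)
	$ kubectl --context kindnet-021000 exec deployment/netcat -- /bin/sh -c 'nc -w 5 -z netcat 8080; echo exit=$?'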
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-021000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/calico/Start (74.14s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m14.144784927s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.14s)

TestNetworkPlugins/group/custom-flannel/Start (68.72s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
E0610 10:03:24.270141    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (1m8.717360766s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.72s)

TestNetworkPlugins/group/calico/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5tlht" [359f33b3-06a1-4eb0-8469-c869ad96cff0] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.011965516s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-021000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.14s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-021000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-72h26" [538ce314-3695-4eea-92a2-28ce5148c04e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-72h26" [538ce314-3695-4eea-92a2-28ce5148c04e] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.006271744s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.20s)

TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-021000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

TestNetworkPlugins/group/calico/NetCatPod (13.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-021000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-kszkj" [20185e64-12fc-4fc3-8803-8b2a76ac635c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-kszkj" [20185e64-12fc-4fc3-8803-8b2a76ac635c] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005893206s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-021000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-021000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/false/Start (60.45s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (1m0.448904877s)
--- PASS: TestNetworkPlugins/group/false/Start (60.45s)

TestNetworkPlugins/group/enable-default-cni/Start (59.08s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
E0610 10:04:20.698859    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (59.083732543s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.08s)

TestNetworkPlugins/group/false/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-021000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.15s)

TestNetworkPlugins/group/false/NetCatPod (13.22s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-021000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-rvr4x" [1dff5daf-5777-4517-9222-fb66bf41e3ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-rvr4x" [1dff5daf-5777-4517-9222-fb66bf41e3ac] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.007461641s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.22s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-021000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.14s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-021000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-tmmkn" [8eda97b3-6709-4c80-bed0-947194e45c28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0610 10:05:14.447276    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-tmmkn" [8eda97b3-6709-4c80-bed0-947194e45c28] Running
E0610 10:05:21.156324    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.005992069s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.21s)

TestNetworkPlugins/group/false/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-021000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

TestNetworkPlugins/group/false/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

TestNetworkPlugins/group/false/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-021000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/flannel/Start (59.72s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (59.718017299s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.72s)

TestNetworkPlugins/group/bridge/Start (95.53s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (1m35.527168387s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.53s)

TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8vtxt" [48175747-e2b7-4a4a-a847-dca23dbd149b] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.011439168s
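The ControllerPod steps gate each group on the CNI's own workload being healthy before any connectivity checks run: app=flannel in the kube-flannel namespace here, and earlier k8s-app=calico-node and app=kindnet in kube-system. A manual spot-check, as a sketch:

	# the flannel daemonset pod must be Running before NetCatPod and friends
	$ kubectl --context flannel-021000 -n kube-flannel get pods -l app=flannel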
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-021000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.14s)

TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-021000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-tm5j6" [69455096-ce1f-4eba-a3c3-7c2f125ef591] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0610 10:06:47.122478    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:06:47.127581    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:06:47.138168    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:06:47.158245    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:06:47.199560    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:06:47.280642    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:06:47.441240    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:06:47.761993    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:06:48.403400    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-tm5j6" [69455096-ce1f-4eba-a3c3-7c2f125ef591] Running
E0610 10:06:49.684915    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:06:52.246610    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004855202s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-021000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/Start (57.55s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (57.552042158s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (57.55s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-021000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.13s)

TestNetworkPlugins/group/bridge/NetCatPod (14.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-021000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-sn6v6" [af8ceaba-164e-4d68-a48a-1e9dbb344d14] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0610 10:07:20.436623    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/auto-021000/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-sn6v6" [af8ceaba-164e-4d68-a48a-1e9dbb344d14] Running
E0610 10:07:28.108263    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.005031443s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.24s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-021000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (162.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-366000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E0610 10:08:09.067547    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-366000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m42.034446778s)
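old-k8s-version pins the cluster to Kubernetes v1.16.0 via --kubernetes-version, checking that current minikube can still drive a much older control plane; most of the 2m42s is spent pulling those older images. A sketch for confirming the served version, assuming kubectl still accepts --short:

	# server version should report v1.16.0
	$ kubectl --context old-k8s-version-366000 version --short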
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (162.03s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-021000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-021000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-d6ttc" [bc861426-1f71-409a-b6a4-1d59cfd9e062] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-d6ttc" [bc861426-1f71-409a-b6a4-1d59cfd9e062] Running
E0610 10:08:21.876656    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/auto-021000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.005638963s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.31s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-021000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-021000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)
E0610 10:24:20.803233    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 10:24:49.292385    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:24:59.973342    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/calico-021000/client.crt: no such file or directory
E0610 10:25:02.324023    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/custom-flannel-021000/client.crt: no such file or directory
E0610 10:25:05.393166    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 10:25:08.517144    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:25:12.547112    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:25:14.553685    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 10:25:16.980039    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:25:21.260443    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
E0610 10:25:22.333902    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 10:25:31.735782    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (69.16s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-194000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.27.2
E0610 10:08:40.423887    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/custom-flannel-021000/client.crt: no such file or directory
E0610 10:08:41.704408    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/custom-flannel-021000/client.crt: no such file or directory
E0610 10:08:41.932523    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/calico-021000/client.crt: no such file or directory
E0610 10:08:44.265872    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/custom-flannel-021000/client.crt: no such file or directory
E0610 10:08:47.053187    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/calico-021000/client.crt: no such file or directory
E0610 10:08:49.386055    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/custom-flannel-021000/client.crt: no such file or directory
E0610 10:08:57.294945    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/calico-021000/client.crt: no such file or directory
E0610 10:08:59.627108    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/custom-flannel-021000/client.crt: no such file or directory
E0610 10:09:17.775205    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/calico-021000/client.crt: no such file or directory
E0610 10:09:20.108200    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/custom-flannel-021000/client.crt: no such file or directory
E0610 10:09:20.689021    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 10:09:30.985480    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:09:43.794612    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/auto-021000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-194000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.27.2: (1m9.155499424s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.16s)

TestStartStop/group/no-preload/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-194000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [77bf637d-5bae-442d-af53-c9107f52c6bb] Pending
helpers_test.go:344: "busybox" [77bf637d-5bae-442d-af53-c9107f52c6bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [77bf637d-5bae-442d-af53-c9107f52c6bb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.020437624s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-194000 exec busybox -- /bin/sh -c "ulimit -n"
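DeployApp is a workload smoke test against the freshly started cluster: it applies testdata/busybox.yaml, waits up to 8m for the pod, then runs `ulimit -n` through exec to prove the runtime wires up exec and sane fd limits. The same check by hand, as a sketch:

	# a plain numeric fd limit confirms exec into the pod works
	$ kubectl --context no-preload-194000 exec busybox -- /bin/sh -c 'ulimit -n'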
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.28s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-194000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0610 10:09:58.734810    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/calico-021000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-194000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/no-preload/serial/Stop (8.27s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-194000 --alsologtostderr -v=3
E0610 10:10:01.067644    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/custom-flannel-021000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-194000 --alsologtostderr -v=3: (8.271717128s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.27s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-194000 -n no-preload-194000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-194000 -n no-preload-194000: exit status 7 (52.560936ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-194000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
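Exit status 7 corresponds to the Stopped host state shown in the stdout above (hence the test's "may be ok"), and the point of the step is that `addons enable` still succeeds against a stopped profile; the addon should then come up on the next start. Checking the host state by hand, as a sketch:

	# expect "Stopped" and exit=7 while the profile is down
	$ out/minikube-darwin-amd64 status -p no-preload-194000 --format='{{.Host}}'; echo exit=$?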
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/no-preload/serial/SecondStart (298.69s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-194000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.27.2
E0610 10:10:08.401794    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:10:08.408109    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:10:08.419007    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:10:08.439240    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:10:08.481332    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:10:08.561476    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:10:08.721675    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:10:09.043511    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:10:09.684286    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:10:10.964755    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:10:12.431361    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:10:12.436783    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:10:12.448338    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:10:12.469347    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:10:12.509454    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:10:12.590758    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:10:12.752169    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:10:13.072533    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:10:13.525706    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:10:13.712824    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:10:14.438758    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 10:10:14.994094    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:10:17.554953    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:10:18.646654    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:10:21.145243    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
E0610 10:10:22.218749    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
E0610 10:10:22.677014    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:10:28.886532    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-194000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.27.2: (4m58.540899964s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-194000 -n no-preload-194000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (298.69s)
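
The cert_rotation.go warnings flooding this run come from kubeconfig users that still reference client certificates belonging to profiles deleted earlier in the suite (false-021000, enable-default-cni-021000, and others). A hedged sketch for listing such stale entries (assumes kubectl and the kubeconfig path shown in the log; the loop itself is mine, not the harness's):

	KUBECONFIG=/Users/jenkins/minikube-integration/16578-1235/kubeconfig \
	kubectl config view -o jsonpath='{range .users[*]}{.name}{"\t"}{.user.client-certificate}{"\n"}{end}' |
	while IFS=$'\t' read -r name crt; do
	  # A user with a file-based cert whose file is gone keeps triggering the
	  # cert_rotation warning until it is removed from the kubeconfig.
	  [ -n "$crt" ] && [ ! -f "$crt" ] && echo "stale: $name -> $crt"
	done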

TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-366000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2bbff04b-1ce3-49e5-abb1-139b26ac769a] Pending
helpers_test.go:344: "busybox" [2bbff04b-1ce3-49e5-abb1-139b26ac769a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0610 10:10:32.917002    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [2bbff04b-1ce3-49e5-abb1-139b26ac769a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.017817732s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-366000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)
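
The DeployApp step above is essentially "create, wait for Ready, then probe the container". An equivalent manual sequence (a sketch; "kubectl wait" is my substitution for the harness's own polling loop):

	kubectl --context old-k8s-version-366000 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-366000 -n default wait pod \
	  -l integration-test=busybox --for=condition=Ready --timeout=8m0s
	# The final probe checks the container received a sane open-file limit.
	kubectl --context old-k8s-version-366000 exec busybox -- /bin/sh -c "ulimit -n"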

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-366000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-366000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (8.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-366000 --alsologtostderr -v=3
E0610 10:10:49.366041    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-366000 --alsologtostderr -v=3: (8.241252558s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.24s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-366000 -n old-k8s-version-366000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-366000 -n old-k8s-version-366000: exit status 7 (52.355901ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-366000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/old-k8s-version/serial/SecondStart (501.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-366000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E0610 10:10:53.397029    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:11:20.652222    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/calico-021000/client.crt: no such file or directory
E0610 10:11:22.986497    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/custom-flannel-021000/client.crt: no such file or directory
E0610 10:11:30.325634    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:11:34.356887    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:11:37.488528    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 10:11:38.102685    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:11:38.108537    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:11:38.118768    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:11:38.139500    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:11:38.181464    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:11:38.261729    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:11:38.422374    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:11:38.742808    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:11:39.383588    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:11:40.664310    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:11:43.224513    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:11:47.113060    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:11:48.344657    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:11:58.585083    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:11:59.937517    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/auto-021000/client.crt: no such file or directory
E0610 10:12:14.821310    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:12:18.145200    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:12:18.150346    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:12:18.160451    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:12:18.180769    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:12:18.221585    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:12:18.302133    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:12:18.462370    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:12:18.783166    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:12:19.066504    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:12:19.424455    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:12:20.706051    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:12:23.266630    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:12:23.743851    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 10:12:27.629329    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/auto-021000/client.crt: no such file or directory
E0610 10:12:28.386670    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:12:38.627445    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:12:52.243099    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:12:56.276394    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:12:59.107275    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:13:00.026240    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:13:10.812937    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:13:10.818869    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:13:10.829575    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:13:10.850675    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:13:10.891601    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:13:10.972774    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:13:11.133937    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:13:11.454537    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:13:12.095509    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:13:13.375872    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:13:15.936561    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:13:21.057117    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:13:31.298322    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:13:36.785258    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/calico-021000/client.crt: no such file or directory
E0610 10:13:39.128723    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/custom-flannel-021000/client.crt: no such file or directory
E0610 10:13:40.066162    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:13:51.777866    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:14:04.488978    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/calico-021000/client.crt: no such file or directory
E0610 10:14:06.823176    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/custom-flannel-021000/client.crt: no such file or directory
E0610 10:14:20.679085    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0610 10:14:21.944872    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:14:32.755956    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:15:01.983614    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-366000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (8m21.833432968s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-366000 -n old-k8s-version-366000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (501.98s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-df48f" [f4d54960-2d22-44ce-9a9e-58b59b3b489a] Running
E0610 10:15:08.391652    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011222765s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-df48f" [f4d54960-2d22-44ce-9a9e-58b59b3b489a] Running
E0610 10:15:12.420636    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:15:14.427303    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006410838s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-194000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-194000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)
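
VerifyKubernetesImages lists the runtime's images over SSH and flags anything outside the expected Kubernetes registries; the gcr.io busybox above is the test's own workload, so it is merely reported. A sketch of the same query (assumes jq is available where you run it; the grep is a rough filter, not the test's exact allow-list):

	out/minikube-darwin-amd64 ssh -p no-preload-194000 "sudo crictl images -o json" \
	  | jq -r '.images[].repoTags[]' \
	  | grep -v '^registry.k8s.io/'   # whatever remains is "non-minikube" by this rough filter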

TestStartStop/group/no-preload/serial/Pause (1.8s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-194000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-194000 -n no-preload-194000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-194000 -n no-preload-194000: exit status 2 (147.703962ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-194000 -n no-preload-194000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-194000 -n no-preload-194000: exit status 2 (148.652359ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-194000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-194000 -n no-preload-194000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-194000 -n no-preload-194000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.80s)
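
Pause verifies both directions: after "pause" the apiserver reports Paused and the kubelet Stopped (each via exit status 2, which the harness tolerates as "may be ok"), and after "unpause" both status calls succeed again. Condensed into a sketch of the sequence above, not additional coverage:

	out/minikube-darwin-amd64 pause -p no-preload-194000 --alsologtostderr -v=1
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-194000   # "Paused", rc=2
	out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-194000     # "Stopped", rc=2
	out/minikube-darwin-amd64 unpause -p no-preload-194000 --alsologtostderr -v=1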

TestStartStop/group/embed-certs/serial/FirstStart (89.77s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-623000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.27.2
E0610 10:15:36.079867    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:15:40.113018    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:15:54.675017    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:16:38.092107    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:16:47.102593    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-623000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.27.2: (1m29.768254454s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.77s)

TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-623000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [73fb6491-8e7e-4b3a-ac95-06163b8beafe] Pending
helpers_test.go:344: "busybox" [73fb6491-8e7e-4b3a-ac95-06163b8beafe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [73fb6491-8e7e-4b3a-ac95-06163b8beafe] Running
E0610 10:16:59.928419    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/auto-021000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.013424228s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-623000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-623000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-623000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/embed-certs/serial/Stop (8.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-623000 --alsologtostderr -v=3
E0610 10:17:05.781752    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-623000 --alsologtostderr -v=3: (8.235315603s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.24s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-623000 -n embed-certs-623000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-623000 -n embed-certs-623000: exit status 7 (52.422741ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-623000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/embed-certs/serial/SecondStart (299.63s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-623000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.27.2
E0610 10:17:18.134568    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:17:45.818281    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
E0610 10:18:10.803381    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:18:36.775757    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/calico-021000/client.crt: no such file or directory
E0610 10:18:38.510323    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:18:39.119742    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/custom-flannel-021000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-623000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.27.2: (4m59.471567439s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-623000 -n embed-certs-623000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (299.63s)
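
--embed-certs inlines the client certificate and key into the kubeconfig as base64 data instead of referencing files under .minikube/profiles, which is what this group restarts to verify. One way to confirm by hand (a sketch; the jsonpath filter assumes the user entry is named after the profile):

	kubectl config view --raw \
	  -o jsonpath='{.users[?(@.name=="embed-certs-623000")].user.client-certificate-data}' \
	  | head -c 32 && echo "... (base64 data present, no file path)"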

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-qc7wv" [115a3573-32ef-44f2-96eb-7daf8208b6e8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012735967s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-qc7wv" [115a3573-32ef-44f2-96eb-7daf8208b6e8] Running
E0610 10:19:20.670098    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0056841s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-366000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-366000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/old-k8s-version/serial/Pause (1.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-366000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-366000 -n old-k8s-version-366000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-366000 -n old-k8s-version-366000: exit status 2 (143.689301ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-366000 -n old-k8s-version-366000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-366000 -n old-k8s-version-366000: exit status 2 (144.404184ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-366000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-366000 -n old-k8s-version-366000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-366000 -n old-k8s-version-366000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.71s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-645000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.27.2
E0610 10:19:49.156662    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:19:49.162623    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:19:49.173929    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:19:49.196038    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:19:49.237163    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:19:49.317338    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:19:49.477773    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:19:49.797849    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:19:50.439487    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:19:51.720433    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:19:54.281173    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:19:59.401223    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:20:04.236883    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
E0610 10:20:08.382899    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/false-021000/client.crt: no such file or directory
E0610 10:20:09.641131    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:20:12.410610    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/enable-default-cni-021000/client.crt: no such file or directory
E0610 10:20:14.417516    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/skaffold-533000/client.crt: no such file or directory
E0610 10:20:21.126278    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/addons-200000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-645000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.27.2: (50.505590231s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.51s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-645000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [016c7bdf-0a77-412a-b206-157c0ed00590] Pending
E0610 10:20:22.198718    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/functional-222000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [016c7bdf-0a77-412a-b206-157c0ed00590] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [016c7bdf-0a77-412a-b206-157c0ed00590] Running
E0610 10:20:30.120666    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.012429827s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-645000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-645000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0610 10:20:31.599386    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
E0610 10:20:31.604745    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
E0610 10:20:31.614878    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
E0610 10:20:31.635275    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
E0610 10:20:31.675474    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
E0610 10:20:31.756263    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
E0610 10:20:31.916809    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-645000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-645000 --alsologtostderr -v=3
E0610 10:20:32.237874    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
E0610 10:20:32.878312    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
E0610 10:20:34.159092    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
E0610 10:20:36.719703    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-645000 --alsologtostderr -v=3: (8.256837296s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 7 (51.613077ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-645000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-645000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.27.2
E0610 10:20:41.841093    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
E0610 10:20:52.081094    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
E0610 10:21:11.080446    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:21:12.562058    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
E0610 10:21:38.222238    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/flannel-021000/client.crt: no such file or directory
E0610 10:21:47.233794    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:21:53.663658    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
E0610 10:22:00.059340    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/auto-021000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-645000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.27.2: (4m59.39012777s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.54s)
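
--apiserver-port=8444 is the point of this group: the profile must come up, stop, and come back on a non-default API server port. A quick verification sketch (assumes the cluster entry is named after the profile):

	kubectl config view \
	  -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-645000")].cluster.server}'
	# expected to print an https URL ending in :8444 (my assumption about the URL form)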

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xqz9k" [ce440466-753c-4bbf-a69f-7493821b57ab] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009000469s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xqz9k" [ce440466-753c-4bbf-a69f-7493821b57ab] Running
E0610 10:22:18.266147    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/bridge-021000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005995667s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-623000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-623000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/embed-certs/serial/Pause (1.82s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-623000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-623000 -n embed-certs-623000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-623000 -n embed-certs-623000: exit status 2 (144.16953ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-623000 -n embed-certs-623000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-623000 -n embed-certs-623000: exit status 2 (148.49404ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-623000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-623000 -n embed-certs-623000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-623000 -n embed-certs-623000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.82s)

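The Pause subtest above runs a pause/status/unpause cycle; a hand-run sketch of the same sequence follows, with the outputs and exit codes observed in this run noted as comments (the non-zero exit status 2 on a paused profile is what the test marks "may be ok"):

	# Pause the running profile, then confirm component state via status.
	out/minikube-darwin-amd64 pause -p embed-certs-623000 --alsologtostderr -v=1
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-623000 -n embed-certs-623000   # prints "Paused", exit status 2
	out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-623000 -n embed-certs-623000     # prints "Stopped", exit status 2

	# Unpause and re-check; the same status commands then exit 0.
	out/minikube-darwin-amd64 unpause -p embed-certs-623000 --alsologtostderr -v=1
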
TestStartStop/group/newest-cni/serial/FirstStart (49.26s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-873000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.27.2
E0610 10:22:33.140002    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/no-preload-194000/client.crt: no such file or directory
E0610 10:23:10.303131    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kindnet-021000/client.crt: no such file or directory
E0610 10:23:10.935646    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/kubenet-021000/client.crt: no such file or directory
E0610 10:23:15.582980    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/old-k8s-version-366000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-873000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.27.2: (49.256439755s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.26s)

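The FirstStart invocation above is a single long command; split across lines for readability it reads as below (same flags as logged; the comment is paraphrase, not log output):

	# Start a CNI-enabled profile, waiting only for the API server, system pods and default SA.
	out/minikube-darwin-amd64 start -p newest-cni-873000 \
	  --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=hyperkit \
	  --kubernetes-version=v1.27.2
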
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-873000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

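The addon step above enables metrics-server while overriding where its image comes from; both override flags take Component=Value pairs, as in this sketch of the logged command (fake.domain is the placeholder registry the test itself supplies):

	# Enable the addon with a substituted image and source registry.
	out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-873000 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
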
TestStartStop/group/newest-cni/serial/Stop (8.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-873000 --alsologtostderr -v=3
E0610 10:23:23.111754    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/auto-021000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-873000 --alsologtostderr -v=3: (8.282428074s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-873000 -n newest-cni-873000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-873000 -n newest-cni-873000: exit status 7 (50.523188ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-873000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/newest-cni/serial/SecondStart (38.87s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-873000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.27.2
E0610 10:23:36.909271    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/calico-021000/client.crt: no such file or directory
E0610 10:23:39.251741    1682 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/custom-flannel-021000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-873000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.27.2: (38.716396894s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-873000 -n newest-cni-873000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.87s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-873000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/newest-cni/serial/Pause (1.74s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-873000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-873000 -n newest-cni-873000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-873000 -n newest-cni-873000: exit status 2 (144.578234ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-873000 -n newest-cni-873000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-873000 -n newest-cni-873000: exit status 2 (146.266337ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-873000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-873000 -n newest-cni-873000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-873000 -n newest-cni-873000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.74s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-c99fw" [8950d3f1-a90c-4f8a-963f-c7ad19a31663] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012024764s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-c99fw" [8950d3f1-a90c-4f8a-963f-c7ad19a31663] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006042533s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-645000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-645000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-645000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 2 (141.238985ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 2 (144.345215ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-645000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.75s)

Test skip (19/316)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

TestDownloadOnly/v1.27.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.4s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-021000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-021000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/hosts:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/resolv.conf:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-021000

>>> host: crictl pods:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: crictl containers:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> k8s: describe netcat deployment:
error: context "cilium-021000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-021000" does not exist

>>> k8s: netcat logs:
error: context "cilium-021000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-021000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-021000" does not exist

>>> k8s: coredns logs:
error: context "cilium-021000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-021000" does not exist

>>> k8s: api server logs:
error: context "cilium-021000" does not exist

>>> host: /etc/cni:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: ip a s:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: ip r s:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: iptables-save:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: iptables table nat:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-021000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-021000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-021000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-021000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-021000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-021000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-021000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-021000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-021000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-021000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-021000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: kubelet daemon config:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> k8s: kubelet logs:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/jenkins/minikube-integration/16578-1235/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 10 Jun 2023 09:43:15 PDT
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.64.13:8443
  name: multinode-826000-m01
contexts:
- context:
    cluster: multinode-826000-m01
    extensions:
    - extension:
        last-update: Sat, 10 Jun 2023 09:43:15 PDT
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: context_info
    namespace: default
    user: multinode-826000-m01
  name: multinode-826000-m01
current-context: ""
kind: Config
preferences: {}
users:
- name: multinode-826000-m01
  user:
    client-certificate: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m01/client.crt
    client-key: /Users/jenkins/minikube-integration/16578-1235/.minikube/profiles/multinode-826000-m01/client.key

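Every kubectl probe in this debugLogs dump fails the same way because the kubeconfig above contains no cilium-021000 context: only the leftover multinode-826000-m01 entry remains, with current-context set to "". A quick hand check (illustrative commands, not from the run):

	# List the contexts the kubeconfig actually holds; cilium-021000 is absent.
	kubectl config get-contexts

	# Pinning a command to the missing context reproduces the errors above.
	kubectl --context cilium-021000 get pods   # error: context "cilium-021000" does not exist
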
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-021000

>>> host: docker daemon status:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: docker daemon config:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: docker system info:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: cri-docker daemon status:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: cri-docker daemon config:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: cri-dockerd version:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: containerd daemon status:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: containerd daemon config:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: containerd config dump:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: crio daemon status:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: crio daemon config:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/crio:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: crio config:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

----------------------- debugLogs end: cilium-021000 [took: 5.012078613s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-021000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-021000
--- SKIP: TestNetworkPlugins/group/cilium (5.40s)

TestStartStop/group/disable-driver-mounts (0.36s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-932000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-932000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.36s)